SamuKnows

The SamuKnows project can distinguish the sub-processes taking place in its sensory input. It merges the SamuBrain and SamuVocab projects.

SamuKnows, exp. 8, cognitive mental organs: MPU (Mental Processing Unit), acquiring higher-order knowledge

SamuBrain

I am currently working on a manuscript titled "Samu in His Prenatal Development", in which I want to establish a definition of a mathematical machine for learning. For this reason, I have carried out various experiments on the subject.

The SamuBrain project implements a version of the definition in question. In this experiment, I have been investigating the possibility of developing a "cognitive mental organ", called a Mental Processing Unit (MPU for short) in the terminology of this project's sources.

An MPU consists of two lattices: an input lattice and an output lattice. The input lattice (called reality) represents the perception of the agent. Each cell of the output lattice (called Samu's predictions) is equipped with a COP-based SAMU engine that predicts the next state of the corresponding input cell; a minimal code sketch of this structure appears below. Three different inputs are shown to the agent in the experiment:

  1. 5 gliders move in the input lattice in accordance with Conway's Game of Life (https://github.com/nbatfai/SamuLife)
  2. 9 simple "pictures" are shown (https://github.com/nbatfai/SamuStroop)
  3. a simple "film" is shown (https://github.com/nbatfai/SamuMovie)

In the SamuBrain project, the agent must learn and recognize these complex patterns, as shown in the video at https://youtu.be/_W0Ep2HpJSQ
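As a rough illustration of the two-lattice structure (not the project's actual code, which lives in this repository), here is a minimal C++ sketch: a toy transition counter stands in for the COP-based SAMU engine in every output cell, and a falling per-frame error count signals that the input pattern has been learnt. All identifiers are hypothetical.

// Minimal illustrative MPU: an input lattice plus one toy predictor per
// output cell. A simple transition counter stands in for the COP-based
// SAMU engine; all names here are hypothetical, not from the repository.
#include <array>
#include <cstdio>

constexpr int N = 10; // toy lattice size

// Toy per-cell engine: counts 0/1 transitions and predicts the more
// frequent successor of the cell's current state.
struct CellPredictor {
    int seen[2][2] = {{0, 0}, {0, 0}}; // seen[current][next] counts
    int last = 0;                      // current state of the input cell
    int predict() const { return seen[last][1] > seen[last][0]; }
    void observe(int next) { ++seen[last][next]; last = next; }
};

using Lattice = std::array<std::array<int, N>, N>;

struct Mpu {
    Lattice reality{};                                      // input lattice
    std::array<std::array<CellPredictor, N>, N> predictors; // output lattice

    // One sensory step: predict every cell, then learn from the truth.
    int step(const Lattice& nextFrame) {
        int errors = 0;
        for (int r = 0; r < N; ++r)
            for (int c = 0; c < N; ++c) {
                errors += predictors[r][c].predict() != nextFrame[r][c];
                predictors[r][c].observe(nextFrame[r][c]);
            }
        reality = nextFrame;
        return errors; // falling error count = the pattern is being learnt
    }
};

int main() {
    Mpu mpu;
    Lattice a{}, b{};
    a[4][4] = 1; b[4][5] = 1; // a two-frame "film": a dot jumping between cells
    for (int t = 0; t < 10; ++t)
        std::printf("step %d: %d prediction errors\n", t, mpu.step(t % 2 ? b : a));
}

On this alternating two-frame input the printed error count drops to zero after the first couple of steps, once every transition has been observed.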

SamuKnows

Here the SamuMovie video is divided into its three subcomponents: the house, the car, and the man. First, I teach the agent to recognize these standalone subcomponents; then the agent must detect the house, the moving car, and the moving man in the original SamuMovie video. A toy sketch of this attribution idea follows below.
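The repository does this with the SamuBrain machinery; purely to illustrate the attribution idea, the toy below (a 1-D lattice and simple neighbourhood transition counters instead of the real SAMU engines; every identifier is hypothetical) trains one predictor bank per standalone notion and then labels each cell of the composite film with the notion whose bank predicts it best.

// Toy illustration only, NOT the project's algorithm: one predictor bank per
// learnt notion ("house", "car", "man"); cells of the composite film are
// attributed to the bank that predicts them best. Identifiers hypothetical.
#include <array>
#include <cstdio>
#include <string>
#include <vector>

constexpr int L = 16;             // 1-D toy lattice (the real lattices are 2-D)
using Frame = std::array<int, L>;

// Encode a cell's (left, self, right) neighbourhood as 0..7.
static int ctx(const Frame& f, int i) {
    int l = i > 0 ? f[i - 1] : 0, r = i + 1 < L ? f[i + 1] : 0;
    return l << 2 | f[i] << 1 | r;
}

// One bank of per-cell transition counters, standing in for an MPU.
struct Bank {
    std::string name;
    int seen[L][8][2] = {};       // seen[cell][context][next-state] counts

    void train(const std::vector<Frame>& clip) {
        for (size_t t = 0; t + 1 < clip.size(); ++t)
            for (int i = 0; i < L; ++i)
                ++seen[i][ctx(clip[t], i)][clip[t + 1][i]];
    }
    // Did this bank predict cell i's next state correctly?
    bool hit(const Frame& cur, const Frame& nxt, int i) const {
        const int* s = seen[i][ctx(cur, i)];
        return (s[1] > s[0]) == (nxt[i] == 1);
    }
};

// Toy clips: a static "house", a "car" cycling right, a "man" cycling left.
static Frame house(int) { Frame f{}; f[0] = f[1] = f[2] = 1; return f; }
static Frame car(int t) { Frame f{}; f[5 + t % 4] = 1; return f; }
static Frame man(int t) { Frame f{}; f[14 - t % 4] = 1; return f; }

int main() {
    std::vector<Frame> clips[3], film;
    for (int t = 0; t < 40; ++t) {
        Frame h = house(t), c = car(t), m = man(t), f{};
        for (int i = 0; i < L; ++i) f[i] = h[i] | c[i] | m[i];
        clips[0].push_back(h); clips[1].push_back(c); clips[2].push_back(m);
        film.push_back(f);                   // composite: all three at once
    }
    Bank banks[3] = {{"house"}, {"car"}, {"man"}};
    for (int b = 0; b < 3; ++b) banks[b].train(clips[b]);  // standalone phase

    int hits[3][L] = {};                     // composite phase: score each bank
    for (size_t t = 0; t + 1 < film.size(); ++t)
        for (int b = 0; b < 3; ++b)
            for (int i = 0; i < L; ++i)
                hits[b][i] += banks[b].hit(film[t], film[t + 1], i);

    for (int i = 0; i < L; ++i) {            // strict per-cell winner, else '.'
        int w = 0;
        for (int b = 1; b < 3; ++b) if (hits[b][i] > hits[w][i]) w = b;
        int top = 0;
        for (int b = 0; b < 3; ++b) top += hits[b][i] == hits[w][i];
        std::putchar(top == 1 ? banks[w].name[0] : '.');
    }
    std::putchar('\n');
}

Cells on which one bank strictly out-predicts the others print that notion's initial ('h', 'c', or 'm'); cells that every bank explains equally well print '.'.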

Usage

vSamuBrain

Using the selection mechanism of SamuVocab to detect sub-processes looks like a dead end, so I have returned to the original SamuBrain: https://youtu.be/MLOeNNqd2Nw

# clone the repository and switch to the vSamuBrain branch
git clone https://github.com/nbatfai/SamuKnows.git
cd SamuKnows/
git checkout vSamuBrain
# build with qmake (adjust the path to match your Qt installation)
~/Qt/5.5/gcc_64/bin/qmake SamuLife.pro
make
# run, redirecting the diagnostic log to a file
./SamuKnows 2>out
# follow the log, e.g. in separate terminals
tail -f out | grep "HIGHER-ORDER NOTION MONITOR"
tail -f out | grep SENSITIZATION
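Roughly speaking, the first filter follows the log lines in which the agent reports the higher-order notion it currently takes itself to be observing, and the second follows its sensitization events.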

Previous experiments

Samu (Nahshon) http://arxiv.org/abs/1511.02889, https://github.com/nbatfai/nahshon


SamuLife https://github.com/nbatfai/SamuLife, https://youtu.be/b60m__3I-UM

SamuMovie https://github.com/nbatfai/SamuMovie, https://youtu.be/XOPORbI1hz4

SamuStroop https://github.com/nbatfai/SamuStroop, https://youtu.be/6elIla_bIrw, https://youtu.be/VujHHeYuzIk

SamuBrain https://github.com/nbatfai/SamuBrain

SamuCopy https://github.com/nbatfai/SamuCopy


SamuTicker https://github.com/nbatfai/SamuTicker

SamuVocab https://github.com/nbatfai/SamuVocab


SamuCam https://github.com/nbatfai/SamuCam

Robopsychology One https://github.com/nbatfai/Robopsychology
