
Cerebellar Motor Learning

(Figure: schematic of the motor control system)

This is a simple implementation of the motor control system above. A feedforward (cerebellar cortex) and feedback system (motor cortex) control a plant (musculoskeletal system). The task is for the output of the plant $y(t)$ to match a reference trajectory $r(t)$. The cerebellar cortex module can adapt its weights online, while controlling the plant, to improve its internal model of the plant.

The main goal is to evaluate learning performance for different network architectures of the cerebellar cortex module. We test different learning rules for updating the output weights of the cerebellar-like network.
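As an illustration of what an online update of the output weights can look like, here is a minimal sketch of the classic LMS (delta) rule applied to a one-hidden-layer tanh network. This is not the package's actual API; the names `w`, `A`, `lms_step!`, and the sizes and learning rate are made up for the example.

```julia
# Illustrative LMS-style update of the output weights of a
# one-hidden-layer tanh network (hypothetical names, not the package API).
using LinearAlgebra, Random

Random.seed!(0)
N = 20                       # hidden-layer size (made-up value)
w = zeros(N)                 # adaptable output weights
A = randn(N)                 # static input weights (scalar input here)

# One online step: drive the network with input u, compare the scalar
# output to the target, and nudge w along the error gradient.
function lms_step!(w, A, u, target; eta = 0.01)
    h = tanh.(A .* u)        # hidden-layer activity
    e = target - dot(w, h)   # output error
    w .+= eta .* e .* h      # LMS / delta-rule update
    return e
end
```

Repeated calls on the same sample shrink the error geometrically, which is the behaviour the adaptive cerebellar module exploits while controlling the plant.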

It is 'plug and play' in the sense that you can create, swap, and connect components easily. You can implement different forms of plants, reference trajectories, and controllers. It is straightforward to create your own components by adding to the types.jl and systemComponents_functions.jl files.

The default implementations are as follows:

  • reference trajectory is a sum of sinusoids (band-limited) with different frequencies and phase shifts.
  • motor cortex is a PID controller
  • cerebellar cortex is a feedforward neural network with one hidden layer with tanh activation function and a single linear output. The input weights are static and the output weights are adaptable.
  • the musculoskeletal system is a linear plant.
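For concreteness, the default band-limited reference trajectory can be sketched as a sum of sinusoids. The frequency band, number of components, and sampling grid below are made-up values for illustration, not the package's defaults:

```julia
# Sketch of the default reference trajectory: a sum of sinusoids with
# different band-limited frequencies and random phase shifts.
# Frequencies, component count, and time grid are made-up values.
using Random

Random.seed!(1)
freqs  = range(0.1, 1.0, length = 5)   # band-limited frequencies (Hz)
phases = 2π .* rand(length(freqs))     # random phase shifts

r(t) = sum(sin(2π * f * t + ϕ) for (f, ϕ) in zip(freqs, phases))

# Sample the trajectory that the plant output y(t) should track.
ts = 0:0.01:10
rs = r.(ts)
```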

To get started we recommend going over the notebook notebooks/LMS_test.ipynb.

Running

The whole project is implemented in Julia. Julia allows for rapid calculation of gradients through the whole dynamic system.

This project is an 'unregistered package'. To run the project

  1. Download Julia 1.6. Note that there might be compatibility problems running the code in newer Julia versions.
  2. cd to the folder where you want to clone the project.
  3. Clone the repository:
git clone https://github.com/adrianaprotondo/CerebellarMotorLearning_2.git
  4. Run the code. There are multiple ways to do this, described below. Remember that in Julia, the first use of a package, function, or plotting call triggers compilation and can take a long time; subsequent calls are fast.

Running notebooks .ipynb

To run the notebooks found in the notebooks/ folder.

With VSCode

  1. Open or download VSCode.
  2. Install the Julia extension.
  3. Add the Jupyter extension to VSCode.
  4. Open the project folder CerebellarMotorLearning_2
  5. Open one of the .ipynb in notebooks/
  6. Select the Julia-1.6 kernel
  7. Run the cells. The first run can take up to 10 minutes while the code compiles.

On a terminal

  1. Go to the project folder CerebellarMotorLearning_2
  2. Open a Julia terminal by typing julia or julia-1.6
  3. Make sure you add and use Revise.jl before doing anything else.
    ]
    add Revise
    
  4. Install IJulia
    ] 
    add IJulia
    
  5. If you already have Python/Jupyter installed on your machine, you can launch the notebook server the usual way by running jupyter notebook in a terminal. Otherwise, type the following in the Julia terminal:
    using IJulia
    notebook()  
    
  6. Navigate to the notebook, open it, and select the Julia-1.6 kernel

Running scripts

To run scripts like scripts/testSize_static_Ls_SS_ssFromMin_analyse.jl

In VSCode

In VSCode you can open one of the scripts directly and run the command Julia: Execute active File in new REPL

In Julia terminal

In a Julia terminal, ']' switches to the package manager, and ';' switches to the shell. First switch to the package manager and add Revise:

]
add Revise

then activate the project

]
activate . 
update

This installs and activates the environment for this package. Then type:

using CerebellarMotorLearning 

Then switch to the shell (with ;) and cd to the scripts folder. Again, you can activate the environment in that folder with

activate .
update

At this point you can type include("xxx.jl") to run a script. Simultaneously, thanks to Revise.jl, you can alter functions and code in the CerebellarMotorLearning package, and the changes will immediately be reflected in the code that calls it.

Performance tips

  • Only compatible with Julia v1.6. Other versions raise errors with ModelingToolkit.jl and with loading variables via JLD2.jl.
  • Keep Flux.jl below v0.12.9. Newer versions raise errors with Flux.destructure() and ModelingToolkit.jl.

Work in progress

  • Adding the option of random reference trajectories $r(t)$ generated from a filtered Ornstein-Uhlenbeck process.
  • Adding the option of an RNN implementation for the plant.
