IIC-OSIC-TOOLS (Integrated Infrastructure for Collaborative Open Source IC Tools) is an all-in-one Docker/Podman container for open-source-based integrated circuit design, covering both analog and digital circuit flows. The CPU architectures x86_64/amd64 and aarch64/arm64 are natively supported, based on Ubuntu 24.04 LTS (since release 2025.01). This collection of tools is curated by the Department for Integrated Circuits (ICD), Johannes Kepler University (JKU).
- IIC-OSIC-TOOLS
- Table of Contents
- 1. How to Use These Open-Source (and Free) IC Design Tools
- 2. Installed PDKs
- 3. Installed Tools
- 4. Quick Launch for Designers
- 5. Using the Container with Other Container Engines
- 6. Support with Issues/Problems/Bugs
For great step-by-step instructions on the installation and operation of our tool collection, please check out Kwantae Kim's Setting Up Open Source Tools with Docker!
It supports multiple modes of operation:
- Using a complete desktop environment (XFCE) in Xvnc (a VNC server), either accessing it directly with a VNC client of your choice or via the integrated noVNC server that runs in your browser.
- Using a local X11 server and directly showing the application windows on your desktop.
- Using a Jupyter Notebook running inside the container, opened in the host's browser.
- Using it as a development container in Visual Studio Code (or other IDEs)
Use the green Code button and either download the ZIP file or clone the repository:

git clone --depth=1 https://github.com/iic-jku/iic-osic-tools.git

See instructions on how to do this in the section Quick Launch for Designers further down in this README.
Enter the directory of this repository on your computer, and use one of the methods described in the section Quick Launch for Designers to start up and run a Docker container based on our image. The easiest way is probably to use the VNC mode.
If you do this for the first time, or we have pushed an updated image to DockerHub, this can take a while since the image is pulled (downloaded) automatically from DockerHub. Since this image is ca. 4 GB, this takes time, depending on your internet speed. Please note that this compressed image will be extracted on your drive, so please provide at least 20 GB of free drive space. If, after a while, the consumed space gets larger, this may be due to unused images piling up. In this case, delete old ones; please consult the internet for instructions on operating Docker.
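If you need to reclaim space, the following generic Docker commands (not specific to this project) help you inspect and remove unused images; use them with care, as pruning also removes other unused images on your machine:

```bash
# Show local images and Docker's overall disk usage
docker images
docker system df

# Remove all images not used by any container (asks for confirmation)
docker image prune -a
```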
If you know what you are doing and want full root access without a graphical interface, please use
./start_shell.sh

As of the 2022.12 tag, the following open-source process design kits (PDKs) are pre-installed; the overview below shows how to switch between them by setting environment variables (you can do this per project by putting the respective lines into .designinit, as explained below):
SkyWater Technologies sky130A:

export PDK=sky130A
export PDKPATH=$PDK_ROOT/$PDK
export STD_CELL_LIBRARY=sky130_fd_sc_hd
export SPICE_USERINIT_DIR=$PDKPATH/libs.tech/ngspice
export KLAYOUT_PATH=$PDKPATH/libs.tech/klayout:$PDKPATH/libs.tech/klayout/tech

GlobalFoundries gf180mcuC:

export PDK=gf180mcuC
export PDKPATH=$PDK_ROOT/$PDK
export STD_CELL_LIBRARY=gf180mcu_fd_sc_mcu7t5v0
export SPICE_USERINIT_DIR=$PDKPATH/libs.tech/ngspice
export KLAYOUT_PATH=$PDKPATH/libs.tech/klayout:$PDKPATH/libs.tech/klayout/tech

IHP Microelectronics ihp-sg13g2:

export PDK=ihp-sg13g2
export PDKPATH=$PDK_ROOT/$PDK
export STD_CELL_LIBRARY=sg13g2_stdcell
export SPICE_USERINIT_DIR=$PDKPATH/libs.tech/ngspice
export KLAYOUT_PATH=$PDKPATH/libs.tech/klayout:$PDKPATH/libs.tech/klayout/tech

Probably the best way to switch between PDKs is to use the command sak-pdk. When called without arguments, it lists the installed PDKs. To switch to IHP, for example, enter

sak-pdk ihp-sg13g2

or, to switch to sky130A, enter

sak-pdk sky130A

More options for selecting digital standard cell libraries are available; please check the PDK directories.
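As a quick sanity check after switching, you can inspect the environment variables set above; this is only an illustrative sketch:

```bash
# Show which PDK and standard-cell library are currently selected
echo "PDK:              $PDK"
echo "Std-cell library: $STD_CELL_LIBRARY"

# The PDK installation should be visible at $PDKPATH
ls "$PDKPATH/libs.tech"
```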
Below is a list of the current tools/PDKs already installed and ready to use:
- abc sequential logic synthesis and formal verification
- amaranth a Python-based HDL tool chain
- cace a Python-based circuit automatic characterization engine
- charlib a characterization library for standard cells
- ciel version manager (and builder) for open-source PDKs
- cocotb simulation library for writing VHDL and Verilog test benches in Python
- covered Verilog code coverage
- cvc circuit validity checker (ERC)
- edalize Python abstraction library for EDA tools
- fault design-for-testing (DFT) solution
- fusesoc package manager and build tools for SoC
- gaw3-xschem waveform plot tool for xschem
- gds3d a 3D viewer for GDS files
- gdsfactory Python library for GDS generation
- gdspy Python module for the creation and manipulation of GDS files
- gf180mcu GlobalFoundries 180 nm CMOS PDK
- ghdl-yosys-plugin VHDL plugin for yosys
- ghdl VHDL simulator
- gtkwave waveform plot tool for digital simulation
- hdl21 analog hardware description library
- ihp-sg13g2 IHP Microelectronics 130 nm SiGe:C BiCMOS PDK (partial PDK, not fully supported yet; xschem and ngspice simulation works incl. PSP MOSFET model)
- irsim switch-level digital simulator
- iverilog Verilog simulator
- kactus2 graphical editor for IP-XACT files, which are used to describe hardware components and their interfaces
- klayout-pex parasitic extraction for klayout
- klayout layout viewer and editor for GDS and OASIS
- lctime characterization kit for CMOS cells
- libman design library manager to manage cells and views
- librelane successor of OpenLane(2), RTL2GDS flow scripts
- magic layout editor with DRC and PEX
- najaeda data structures and APIs for the development of post logic synthesis EDA algorithms
- netgen netlist comparison (LVS)
- ngspice SPICE analog and mixed-signal simulator, with OSDI support
- ngspyce Python bindings for ngspice
- nvc VHDL simulator and compiler
- open_pdks PDK setup scripts
- openems electromagnetic field solver using the EC-FDTD method
- openram OpenRAM Python library
- openroad RTL2GDS engine used by librelane
- opensta gate-level static timing verifier
- openvaf Verilog-A compiler for device models
- osic-multitool collection of useful scripts and documentation
- padring padring generation tool
- pulp-tools PULP platform tools consisting of bender, verible, and sv2v
- pygmid Python version of the gm/Id starter kit from Boris Murmann
- pyopus simulation runner and optimization tool for analog circuits
- pyrtl collection of classes for pythonic RTL design
- pyspice interface to ngspice and xyce from Python
- pyuvm Universal Verification Methodology implemented in Python (instead of SystemVerilog) using cocotb
- pyverilog Python toolkit for Verilog
- qflow collection of useful conversion tools
- qucs-s simulation environment with RF emphasis
- rggen code generation tool for control and status registers
- risc-v toolchain GNU compiler toolchain for RISC-V cores
- riscv-pk RISC-V proxy kernel and bootloader
- schemdraw Python package for drawing electrical schematics
- siliconcompiler modular build system for hardware
- sky130 SkyWater Technologies 130 nm CMOS PDK
- slang yosys plugin Slang-based plugin for yosys for SystemVerilog support
- slang SystemVerilog parsing and translation (e.g., to Verilog)
- spicelib library to interact with SPICE-like simulators
- spike Spike RISC-V ISA simulator
- spyci analyze/plot ngspice/xyce output data with Python
- surelog SystemVerilog parser, elaborator, and UHDM compiler
- surfer waveform viewer with a snappy, usable interface and extensibility
- vacask a modern Verilog-A-based analog circuit simulator
- verilator fast Verilog simulator
- veryl a modern hardware description language, based on SystemVerilog
- vlog2verilog Verilog file conversion
- vlsirtools interchange formats for chip design.
- xcircuit schematic editor
- xschem schematic editor
- xyce fast parallel SPICE simulator (incl. xdm netlist conversion tool)
- yosys Verilog synthesis tool (with GHDL plugin for VHDL synthesis and Slang plugin for SystemVerilog synthesis), incl. eqy (equivalence checker), sby (formal verification), and mcy (mutation coverage)
- RF toolkit with FastHenry2, FasterCap, openEMS, and scikit-rf
The tool versions used for librelane (and other tools) are documented in tool_metadata.yml. In addition to the EDA tools above, further valuable tools (like git) and editors (like gvim) are installed. If something useful is missing, please let us know!
Download and install Docker for your operating system:
Note for Linux: Do not run Docker commands or the start scripts as root (sudo)! Follow the instructions in Post-installation steps for Linux.
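The key step of those post-installation instructions is usually adding your user to the docker group so that Docker can be used without sudo; a typical sequence (taken from the Docker documentation, adapt as needed) looks like this:

```bash
# Allow running Docker without sudo (see Docker's post-installation docs)
sudo groupadd docker            # the group may already exist
sudo usermod -aG docker "$USER"
# Log out and back in (or run `newgrp docker`) for the change to take effect
```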
The following start scripts are intended as helper scripts for local or small-scale (single instance) deployment. Consider starting the containers with a custom start script if you need to run many instances.
All user data is persistently placed in the directory pointed to by the environment variable DESIGNS (the default is $HOME/eda/designs for Linux/macOS and %USERPROFILE%\eda\designs for Windows, respectively).
If a file .designinit is put in this directory, it is sourced last when starting the Docker environment. In this way, users can adapt settings to their needs.
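For example, a minimal .designinit that selects the IHP PDK for a project could look like the following sketch, reusing the environment variables from the PDK section above:

```bash
# $DESIGNS/.designinit -- sourced last when the container environment starts
export PDK=ihp-sg13g2
export PDKPATH=$PDK_ROOT/$PDK
export STD_CELL_LIBRARY=sg13g2_stdcell
export SPICE_USERINIT_DIR=$PDKPATH/libs.tech/ngspice
export KLAYOUT_PATH=$PDKPATH/libs.tech/klayout:$PDKPATH/libs.tech/klayout/tech
```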
This mode is recommended for remote operation on a separate server or if you prefer the convenience of a full desktop environment. To start it up, you can use (in a Bash/Unix shell):
./start_vnc.sh

On Windows, you can use the equivalent batch script (if the defaults are acceptable, it can also be started by double-clicking it in Explorer):
.\start_vnc.bat
You can now access the Desktop Environment through your browser (http://localhost). The default password is abc123.
Both scripts will use default settings, which you can tweak by setting shell variables (VARIABLE=default is shown):
- DRY_RUN (unset by default); if set to any value (also 0, false, etc.), the start scripts print all executed commands instead of running them. Useful for debugging/testing or just creating "template commands" for unique setups.
- DESIGNS=$HOME/eda/designs (DESIGNS=%USERPROFILE%\eda\designs for .bat) sets the directory that holds your design files. This directory is mounted into the container at /foss/designs.
- WEBSERVER_PORT=80 sets the port to which the Docker daemon maps the container's webserver port so that it is reachable from localhost and the outside world. 0 disables the mapping.
- VNC_PORT=5901 sets the port to which the Docker daemon maps the container's VNC server port so that it is reachable from localhost and the outside world. This is only required to access the UI with a different VNC client. 0 disables the mapping.
- DOCKER_USER="hpretl" username of the Docker Hub repository from which the images are pulled. Usually, no change is required.
- DOCKER_IMAGE="iic-osic-tools" Docker Hub image name to pull. Usually, no change is required.
- DOCKER_TAG="latest" Docker Hub image tag. By default, it pulls the latest version; changing this might be handy if you want to match a specific version set.
- CONTAINER_USER=$(id -u) (the current user's ID, CONTAINER_USER=1000 for .bat). The user ID (and also group ID) is especially important on Linux and macOS because those are the IDs used to write files in the DESIGNS directory. For debugging/testing, the user and group ID can be set to 0 to gain root access inside the container.
- CONTAINER_GROUP=$(id -g) (the current user's group ID, CONTAINER_GROUP=1000 for .bat)
- CONTAINER_NAME="iic-osic-tools_xvnc_uid_"$(id -u) (attaches the executing user's ID to the name on Unix; just CONTAINER_NAME="iic-osic-tools_xvnc" for .bat) is the name assigned to the container for easy identification. It is used to check whether a container already exists and is running.
To overwrite the default settings, see Overwriting Shell Variables below.
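As an illustration of these variables (the project path is hypothetical), a customized VNC start could look like this:

```bash
# Serve noVNC on port 8080 instead of 80 and use a project-specific
# design directory; all other defaults stay unchanged.
WEBSERVER_PORT=8080 DESIGNS=$HOME/projects/chip1 ./start_vnc.sh

# With VNC_PORT left at its default of 5901, an external VNC client can
# connect to localhost:5901 (default password: abc123).
```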
This mode is recommended if the container is run on the local machine. It is significantly faster than VNC (as it renders the graphics locally), is more lightweight (no complete desktop environment is running), and integrates with the desktop (copy-paste, etc.). To start the container, run the following:
./start_x.sh

or
.\start_x.bat
Attention macOS users: The X server connection is automatically killed if the terminal is idle for too long (when this happens, it looks like a crash of the system). A workaround is to start a second terminal from the initial terminal that pops up when executing the start scripts ./start_x.sh or .\start_x.bat, and then start htop in the initial terminal. In this way, there is ongoing display activity in the initial terminal, and as a positive side effect, the usage of the machine can be monitored. We are looking for a better long-term solution.
Attention macOS users: Please disable the "Enable VirtioFS accelerated directory sharing" setting available as a "Beta Setting," as this will cause issues accessing the mounted drives! However, enabling the general VirtioFS setting works in Docker >v4.15.0!
The following environment variables are used for configuration:
- DRY_RUN (unset by default); if set to any value (also 0, false, etc.), the start scripts print all executed commands instead of running them. Useful for debugging/testing or just creating "template commands" for unique setups (see the example below).
- DESIGNS=$HOME/eda/designs (DESIGNS=%USERPROFILE%\eda\designs for .bat) sets the directory that holds your design files. This directory is mounted into the container at /foss/designs.
- DOCKER_USER="hpretl" username of the Docker Hub repository from which the images are pulled. Usually, no change is required.
- DOCKER_IMAGE="iic-osic-tools" Docker Hub image name to pull. Usually, no change is required.
- DOCKER_TAG="latest" Docker Hub image tag. By default, it pulls the latest version; changing this might be handy if you want to match a specific version set.
- CONTAINER_USER=$(id -u) (the current user's ID, CONTAINER_USER=1000 for .bat). The user ID (and also group ID) is especially important on Linux and macOS because those are the IDs used to write files in the DESIGNS directory.
- CONTAINER_GROUP=$(id -g) (the current user's group ID, CONTAINER_GROUP=1000 for .bat)
- CONTAINER_NAME="iic-osic-tools_xserver_uid_"$(id -u) (attaches the executing user's ID to the name on Unix; just CONTAINER_NAME="iic-osic-tools_xserver" for .bat) is the name assigned to the container for easy identification. It is used to check whether a container already exists and is running.
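For instance, combining DRY_RUN with the other variables lets you preview the exact docker command the script would run; this is only an illustrative sketch:

```bash
# Print the docker command that would be executed, without starting anything
DRY_RUN=1 DESIGNS=$HOME/eda/designs ./start_x.sh
```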
For Windows, WSLg (the graphical subsystem for WSL) is used, which is provided by a socket file inside the container. The display number is :0.
For macOS, the X11 server is accessed through TCP (the default is host.docker.internal:0; host.docker.internal resolves to the host's IP address inside Docker containers, and :0 corresponds to display 0, which maps to TCP port 6000).
Normally, it should not be necessary to modify these settings, but to control the server's address, you can set the following variable:
- DISP is the environment variable that is copied into the DISPLAY variable of the container.
For TCP-based connections, access control might need to be modified. If the xauth executable is in the PATH, the startup script automatically disables access control for localhost, so the X11 server accepts connections from the container. If not, a warning is shown, and you must disable access control yourself.
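If you have to disable access control manually (for example, because xauth is not found), one common approach with XQuartz is the xhost command; treat this as a sketch, and note that it relaxes X11 access control for local connections:

```bash
# Allow X11 connections from localhost (run on the host, not in the container)
xhost +localhost

# Revoke the permission again when done
xhost -localhost
```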
For Linux, the local X11 server is accessed through a Unix socket. There are multiple variables to control this:
- XSOCK=/tmp/.X11-unix is typically the default location of the Unix sockets. The script probes whether it exists and, if yes, mounts it into the container.
- DISP has the same function as on macOS and Windows: it is copied to the container's DISPLAY variable. If it is not set, the value of DISPLAY from the host is copied.
- XAUTH defines the file that holds the cookies for authentication through the socket. If it is unset, the host's XAUTHORITY contents are used. If those are unset too, $HOME/.Xauthority is used.
The defaults for these variables are tested on native X11 servers, X2Go sessions, and Wayland. The script copies and modifies the cookie from the .Xauthority file into a separate, temporary file. This file is then mounted into the container.
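Overriding these defaults should rarely be necessary; if you do need to (the display and authority file below are purely illustrative), it could look like this:

```bash
# Use display :1 and an explicit X authority file (example values only)
DISP=:1 XAUTH=$HOME/.Xauthority ./start_x.sh
```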
Everything should be ready on Linux with a desktop environment/UI (this setup has been tested on X11 and XWayland). For Windows, WSL should be updated to the latest version to provide WSLg (no additional X server needs to be installed; it should be readily available on Windows 10 from Build 19044 and on Windows 11). For macOS, the installation of an X11 server is typically required. Due to the common protocol, every X11 server should work, although only the following are tested:
- For macOS: XQuartz. Important: Please enable "Allow connections from network clients" in the XQuartz preferences [CMD+","], tab "Security".
It is strongly recommended to enable OpenGL:
- The start_x.sh script will take care of that on macOS and set it according to the configuration values. Only a manual restart of XQuartz is required after the script has been run once (observe the output!).
There are multiple ways to configure the start scripts using Bash. Two of them are shown here. First, the variables can be set directly for each run of the script; they are not saved in the active session:
DESIGNS=/my/design/directory DOCKER_USER=another_user ./start_x.sh

The second variant is to set the variables in the current shell session (not persistent between shell restarts or shared between sessions):
export DESIGNS=/my/design/directory
export DOCKER_USER=another_user
./start_x.sh

As those variables are stored in your current shell session, you only have to set them once. After setting them, you can directly run the scripts.
In CMD, you can't set the variables directly when running the script, so for the .bat scripts it works like the second variant for Bash:
SET DESIGNS=\my\design\directory
SET DOCKER_USER=another_user
.\start_x.bat

This is a new usage mode that might fit your needs. Devcontainers are a great way to provide a working build environment along with your own project. They are supported by the Dev Containers extension in Visual Studio Code.
Option 1: In Visual Studio Code, click the remote window icon on the left, then select "Reopen in Container" and "Add configuration to workspace". Enter "ghcr.io/iic-jku/iic-osic-tools/devcontainer" as the template, choose the version of the container, and add more features (probably not needed). The IDE will then restart, download the image, start a terminal, and mount the work folder into the container.
Option 2: Alternatively, you can directly create the configuration file .devcontainer/devcontainer.json:
{
"name": "IIC-OSIC-TOOLS",
"image": "ghcr.io/iic-jku/iic-osic-tools-devcontainer:2024.12"
}

Either way, the great thing is that you can now commit this file to your repository, and all developers will be asked whether they want to reopen their workspace in this container; all they need is Docker and VS Code.
The IIC-OSIC-Tools are meant to be beginner-friendly. If you have limited knowledge of the tools involved (Docker, Podman, etc.), we suggest you follow 4. Quick Launch for Designers. For container experts, there is also support for other container engines and additional tools; see the subsections below.
Podman is a daemonless, OCI-compatible container engine that supports rootless containers to confine privileges inside the container. Normal rootful containers are supported out of the box; the Docker-compatible CLI can be used with the start scripts without modification. For rootless mode, we suggest using the user-namespace mode "keep-id". In this case, the host user launching the container is mapped into the container (same UID, GID, user and group name), preventing access issues between the container and the directories mounted from the host. This can be achieved by using:
DOCKER_EXTRA_PARAMS="--userns=keep-id" ./start_<mode>.sh
It should be noted that rootless mode can't bind to ports below 1024. This means that for the VNC mode, a different webserver port has to be selected, e.g.:
WEBSERVER_PORT=8080 DOCKER_EXTRA_PARAMS="--userns=keep-id" ./start_<mode>.sh
Distrobox is a fancy wrapper around Podman or Docker to create and start containers that are highly integrated with the host. Like the start_x scripts, Distrobox manages the forwarding of X11/Wayland to the container, but it allows even tighter integration by also forwarding the user's home directory and seamlessly integrating other services like the systemd journal, D-Bus, etc.
Distrobox specifically mentions that its main focus lies on integration, and not on sandboxing and security.
The IIC-OSIC-Tools support Distrobox, even though the usage is slightly different compared to the start scripts. Notably, /headless is not the in-container user's home directory, and /foss/designs will not be mounted. Instead, /home/<username> provides full access to the user's home directory.
An IIC-OSIC-Tools Distrobox can be started and accessed with:
distrobox create -n iic-osic-tools -i hpretl/iic-osic-tools:latest
distrobox enter iic-osic-tools
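Once created, a single tool can also be launched directly from the host using Distrobox's command pass-through; xschem here is just an example:

```bash
# Run a tool from the container without opening an interactive shell first
distrobox enter iic-osic-tools -- xschem
```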
We are open to your questions about this container and are very thankful for your input! If you run into a problem, and you are sure it is a bug, please let us know by following this routine:
- Take a look at the KNOWN_ISSUES and the RELEASE_NOTES. Both files may describe problems that we are already aware of and may include a workaround.
- Check the existing Issues on GitHub and see if the problem has been reported already. If yes, please participate in the discussion and help by further collecting information.
- Is the problem connected to the container, or is it rather a problem with a specific tool? If it is the latter, please also check the sources of the tool and contact its maintainer!
- To help us fix the problem, please open an issue on GitHub and report the error. Please give us as much information as possible without being verbose, so filter accordingly. It is also fine to open an issue with very little information; we will help you narrow down the source of the error.
- Finally, if you know exactly how to fix the reported error, we are also happy if you open a pull request with a fix!
Thank you for your cooperation!