Unity serialization with ScriptableObject does not support polymorphism/inheritance

Unfortunately, Unity 3.5’s serialization mechanism for Assets based on ScriptableObject does not support polymorphism. If you store an object of a derived type B in a field of type A, the actual type is lost during serialization.

For instance, in the following piece of code

[Serializable]
public class SerializableA : ScriptableObject
{
    [Serializable]
    public class A
    {
        public virtual string S()
        {
            return "A";
        }
    }

    [Serializable]
    public class B : A
    {
        public override string S()
        {
            return "B";
        }
    }

    public A a;
}

the object assigned to a will always be of type A after serialization. So if you reference the ScriptableObject in a prefab, a will always be an A, even if you assigned a B to it. Hence, a.S() will always return “A”.
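Here is a minimal sketch of the problem. CreateInstance and the field assignment are regular Unity API; the save/reload step is only described in the comments, since it would normally happen through AssetDatabase or a prefab round trip:

// Minimal sketch, assuming the classes defined above
var container = ScriptableObject.CreateInstance<SerializableA>();
container.a = new SerializableA.B();

Debug.Log( container.a.S() ); // Prints "B" -- still a B in memory

// After Unity serializes the asset (e.g. when saving the project)
// and deserializes it again, the field comes back as a plain A:
// Debug.Log( container.a.S() ); // Prints "A"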

Continuous integration with Unity3D on Windows Server 2008 R2 with VirtualBox and Ubuntu

Here I will present the solution that works for us. Big thanks to all the helpers mentioned and not mentioned below, to all the software developers who created these great tools, and to all the others who made this possible.

Our Requirements

At work we want to automatically create builds of our Unity3D applications. Currently we use the continuous integration (CI) middleware Atlassian Bamboo. But the choice of the CI tool is up to you, you could also use Jenkins, Buildbot, TeamCity, or something else. Same with any other tool mentioned here, just use what you like best, or what is best in your context.

Here, Bamboo executes a NAnt task that calls Unity3D with some command line arguments to create a Windows Standalone Player build. Just like this

Bamboo -> NAnt -> Unity3D
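As a rough sketch, such a NAnt target might look like the following. All paths and names are placeholders for your setup, not our actual build file; -batchmode, -quit, -projectPath, and -buildWindowsPlayer are standard Unity command line options. Note that -nographics is deliberately not passed, for the reasons described below.

<!-- Sketch of a NAnt target calling Unity; adapt all paths and names -->
<target name="build-player">
    <exec program="C:\Program Files (x86)\Unity\Editor\Unity.exe">
        <arg value="-batchmode" />
        <arg value="-quit" />
        <arg value="-projectPath" />
        <arg value="C:\builds\MyUnityProject" />
        <arg value="-buildWindowsPlayer" />
        <arg value="C:\builds\output\MyGame.exe" />
    </exec>
</target>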

Problems

We didn’t want to install DirectX on a Windows server machine, and we discovered the Unity command line option -nographics that allows you to build on a headless server. Unfortunately, creating a build through Unity3D with the -nographics option not only disables the graphics requirements on the system running the Unity Editor, but also disables any visuals in the resulting Standalone Player. A black screen is all you will get…

Because we want fully functional builds with graphics, this is not acceptable for our use case. Hence, we decided we needed to create the build without the -nographics option. As a consequence we installed DirectX and equipped the server machine with better graphics hardware. Soon we learned that Unity3D has no access to Direct3D 9 when run in the context of a service (at that time Bamboo was running as a Windows service). Well, bummer.

But, as an alternative, we can run Bamboo with a logged-in user over a Remote Desktop session. Of course it’s not as easy as it sounds at first: you have to stay logged into the Remote Desktop session (you may not even minimize the Remote Desktop window, unless you use this workaround; thanks to liortal53 for pointing it out, although we did not use this nice trick). Basically this is not very practical. Fortunately there is a crazy workaround:

The Solution

We followed the tracks of a hero named Cygon. He/she deserves praise for asking about and solving this problem before us, and for making the solution public:

http://www.gamedev.net/topic/522951-continuous-integration-and-xna/
http://forums.create.msdn.com/forums/p/24664/138642.aspx

That hero was guided by grandmaster timeBandit, who gave the crucial tip at the Gentoo forums:

[1] http://forums.gentoo.org/viewtopic-p-5475029.html

This led us to our solution:

We use VirtualBox which runs an instance of Ubuntu. Inside Ubuntu an Upstart script starts a vncserver which in turn runs rdesktop to create a Remote Desktop session to the Windows host server. Just like this

Windows Server 2008 R2 -> VirtualBox -> Ubuntu -> vncserver -> rdesktop -> Bamboo -> NAnt -> Unity3D

Running VirtualBox as a Windows Service

To run VirtualBox as a service at system start we use the free tool vboxctrl (the link may occasionally be down, just retry). We did try different ways to set up custom Windows services, but this is not as easy as we thought, and this tool made it easy for us. As described in the documentation, you rename vboxctrl.exe to the name of the service you want to run, e.g. “Ubuntu rdesktop.exe”, and put the .ini file with the matching name “Ubuntu rdesktop.ini” into the same directory. Inside the .ini file you change the settings to your configuration. Environment variables like %AAA% did not seem to work in the .ini file, but do set up the variables as indicated in the documentation of vboxctrl (the VirtualBox home and the VirtualBox user config path). When everything is set up, your instance of Ubuntu will be started whenever the server restarts. This gives us

Windows Server 2008 R2 -> VirtualBox -> Ubuntu

Unfortunately the tray icon tool of vboxctrl did not seem to work. But I don’t think we’ll need it because we should be able to start/stop individual virtual machines through the Windows Server Manager and the services configuration dialog.

Creating the persistent Remote Desktop Session

Inside Ubuntu you configure everything as indicated in [1]. We used a different user name and did not require /etc/conf.d/local.start and /etc/conf.d/local.stop. Instead we used an Upstart configuration that starts the vncserver. Also, in /home/user/.vnc/xstartup we provide the username and password of the Windows user to rdesktop (you have to set up a Windows user that is allowed to log in per Remote Desktop and also has the permission to run Bamboo in the cmd console; Bamboo comes with a .bat for that).

As always, it is never a good idea to provide plain text passwords in configuration files. Make sure your system is not accessible from the outside, and also not from the inside, by unauthorized users. The user you create for this setup should not have any administration rights. Also on Ubuntu’s side, don’t use an admin user. Create a regular user, e.g. with adduser on the command line.
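As a sketch, the Upstart job and the xstartup script could look something like this. Every name here (the user, display number, geometry, Windows host, and credentials) is a placeholder for your own setup:

# /etc/init/rdesktop-session.conf -- a minimal Upstart job sketch
description "vncserver hosting a persistent rdesktop session"
start on runlevel [2345]
stop on runlevel [016]
# Run the vncserver as the unprivileged user created above
exec su - user -c "vncserver :1 -geometry 1024x768 -depth 16"

and

#!/bin/sh
# /home/user/.vnc/xstartup -- WINDOWSUSER, PASSWORD, and WINDOWSHOST
# are placeholders. Remember the plain text password warning above:
# restrict access to this file and this machine.
rdesktop -u WINDOWSUSER -p PASSWORD WINDOWSHOST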

If you need to test the rdesktop session from inside Ubuntu and you installed the server edition of Ubuntu, you can install xorg and fluxbox (via apt-get, as well as other tools like vim that you might need). Run fluxbox with the command startx. Right-click on the fluxbox desktop, navigate the menu to open a terminal, and enter your rdesktop command there. But xorg and fluxbox are not required for this to work. Now we are here

Windows Server 2008 R2 -> VirtualBox -> Ubuntu -> vncserver -> rdesktop

Running Bamboo from the Desktop and not as a Service

Just log in via Remote Desktop (from Ubuntu or from another Windows machine) as the user you just configured. In that user’s Windows Start menu, add the Bamboo console .bat to the Startup entries, so that it is started whenever that user logs on. Congrats! You did it

Windows Server 2008 R2 -> VirtualBox -> Ubuntu -> vncserver -> rdesktop -> Bamboo -> NAnt -> Unity3D

You should be able to start the VirtualBox service, and the user you set up should log in automatically via Remote Desktop into your Windows server. As soon as this happens, Bamboo should start as a process run by this user. You can inspect the users that are currently logged into the system in the Users tab of the Task Manager, and you can also check there which user is running a process.

Unless I forgot any critical points, this should give you a nice working solution to automatically create builds with Unity3D.

Again, thanks to all who paved the way!

Picking EditorWindow GUI controls inside a ScrollView following other controls

If you render custom controls in an EditorWindow and you want to pick them with the mouse (maybe for drag and drop), you can use

GUILayoutUtility.GetRect

to reserve a rectangle with automatic layout. You can then use the returned rectangle to draw the control later with the regular GUI methods that take rectangles as parameters.
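For instance, a minimal sketch of that pattern; GUI.Label stands in for whatever rectangle-based GUI method you use, and normalStyle is assumed to be one of your GUIStyle fields:

// Reserve a rectangle via automatic layout...
Rect rectangle = GUILayoutUtility.GetRect( new GUIContent( "foobar" ), normalStyle );

// ...and draw into it later with any GUI method that takes a Rect
GUI.Label( rectangle, "foobar", normalStyle );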

When you put your controls inside a scroll view you need to account for the scroll offset by using the value returned by BeginScrollView.

For some reason the local coordinate system is also reset inside the ScrollView, so the mouse position will be shifted by the height of any controls rendered before the ScrollView. In the example below the mouse y coordinate will be off by the height of the Popup control and any additional controls. You can account for that by retrieving the yMax of the last control with

GUILayoutUtility.GetLastRect

as given in the example below.

// Assumes this code runs inside OnGUI of an EditorWindow;
// selection, options, scrollPosition, and normalStyle are fields
Event e = Event.current;
Vector2 mousePosition = e.mousePosition;

// Some GUI controls before the ScrollView
selection = EditorGUILayout.Popup( selection, options, EditorStyles.toolbarPopup );

// ... more controls, maybe a label or two

// Retrieve the y offset for any previously rendered controls
var lastRectangle = GUILayoutUtility.GetLastRect();
float yOffset = lastRectangle.yMax;

scrollPosition = EditorGUILayout.BeginScrollView( scrollPosition );
{
    // Account for the scroll position
    var samplePosition = mousePosition + scrollPosition;

    // Account for the GUI controls before the ScrollView
    samplePosition.y -= yOffset;

    Rect rectangle = GUILayoutUtility.GetRect( new GUIContent( "foobar" ), normalStyle );

    if ( ( e.type == EventType.MouseDown ) && rectangle.Contains( samplePosition ) )
    {
        // Handle selection here

        // Draw item with selected style and rectangle
    }
    else
    {
        // Draw item with normal style and rectangle
    }
}
EditorGUILayout.EndScrollView();

Using DragAndDrop with Unity GUI

Some Unity functions are badly documented. I just could not find any detailed information on how to use the DragAndDrop functionality provided by the Unity GUI. Here is what I found out:

The following Event types are useful for DragAndDrop functionality:

  • DragUpdated seems to be raised whenever the mouse button is held down and the mouse is moved after a DragAndDrop.StartDrag.
  • DragPerform is raised when the mouse button is released for the drop.
  • DragExited is raised when the ESC key is pressed, but also when the mouse cursor leaves the current GUI area. This makes it unsuitable for detecting whether the drag ended if you want to drag from one window to another: as soon as the mouse moves outside of the source window, Unity will fire DragExited.
  • MouseDown can be used to reset your DragAndDrop logic, because you have to press the mouse button down again to reinitiate a drag. For instance, I reset the object I want selected for the drag on the MouseDown event.
  • MouseDrag is fired as long as the mouse is moved while a button is pressed. I use this event for initiating the drag.

Let’s look at the code I am currently using to drag ‘Tasks’ around. Don’t worry about what the tasks are; they are just the objects that I am displaying with the GUI and dragging around.

void OnGUI()
{
    Event e = Event.current;

    if ( e.isMouse )
    {
        mousePosition = e.mousePosition;
    }

    // If mouse pressed or released

    if ( ( e.type == EventType.MouseUp ) || ( e.type == EventType.MouseDown ) )
    {
        // Clear our drag info in DragAndDrop so that we know that we are not dragging
        DragAndDrop.SetGenericData( DRAG_DATA_KEY, null );
    }

    FindAndListTasks( e, ref focusedTask );

    // If mouse is dragging, we have a focused task, and we are not already dragging a task

    if ( ( e.type == EventType.MouseDrag ) && ( focusedTask != null ) && !HasDraggedTask() )
    {
        StartDrag();

        // Use the event, else the drag won't start

        e.Use();
    }

    if ( e.type == EventType.DragUpdated )
    {
        // Indicate that we don't accept drags ourselves

        DragAndDrop.visualMode = DragAndDropVisualMode.Rejected;
    }
}

private void StartDrag()
{
    // Clear out drag data (doesn't seem to do much)
    DragAndDrop.PrepareStartDrag();

    //      Debug.Log( "dragging " + focusedTask );

    // Set up what we want to drag
    DragAndDrop.SetGenericData( DRAG_DATA_KEY, focusedTask );

    // Clear anything we don't use, else we might get weird behaviour when dropping on
    // some other control

    DragAndDrop.paths = null;
    DragAndDrop.objectReferences = new UnityEngine.Object[ 0 ];

    // Start the actual drag (don't know what the name is for yet)
    DragAndDrop.StartDrag( "Copy Task" );
}

Helper functions are

internal static void AcceptDraggedTask()
{
    DragAndDrop.AcceptDrag();
    ClearDraggedTask();
}

private static void ClearDraggedTask()
{
    DragAndDrop.SetGenericData( DRAG_DATA_KEY, null );
}

internal static bool HasDraggedTask()
{
    return ( DragAndDrop.GetGenericData( DRAG_DATA_KEY ) as Type ) != null;
}

internal static Type GetDraggedTask()
{
    return DragAndDrop.GetGenericData( DRAG_DATA_KEY ) as Type;
}

Then use something like

void OnGUI()
{
    var eventType = Event.current.type;

    if ( eventType == EventType.DragUpdated || eventType == EventType.DragPerform )
    {
        if ( TaskListWindow.HasDraggedTask() )
        {
            Type task = TaskListWindow.GetDraggedTask();

            // Indicate that we can accept the drag

            DragAndDrop.visualMode = DragAndDropVisualMode.Copy;

            if ( eventType == EventType.DragPerform )
            {
                Debug.Log( task.Name );

                TaskListWindow.AcceptDraggedTask();
            }
        }
        else
        {
            DragAndDrop.visualMode = DragAndDropVisualMode.Rejected;
        }

        Event.current.Use();
    }
}

in the receiver.

You should clear all fields of DragAndDrop in order to clean up any values set by a previous Unity DragAndDrop operation. Do something like the

DragAndDrop.paths = null;
DragAndDrop.objectReferences = new UnityEngine.Object[ 0 ];

I did above; otherwise you might get unexpected behaviour. Let me clarify this a bit: if you drag a file in the Project tab, Unity apparently sets the DragAndDrop.paths property. If you later drag your own data onto the Project tab and you don’t clear DragAndDrop.paths, then Unity will read the outdated DragAndDrop.paths value and move a file around in the Project tab. It is also a good idea to reset any generic data

DragAndDrop.SetGenericData( DRAG_DATA_KEY, focusedTask )

in your drag receiver, as it allows you to detect if your drag ended. You could even use a specific generic value (a boolean for instance) to track your drag state. Just use some naming convention that should not clash with other code, e.g. something like <NAMESPACE.VALUE>.
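For example, a key along these lines (the names here are made up, pick your own):

// A sketch of a collision-safe generic data key; the namespace-style
// prefix is just a convention, nothing Unity requires
private const string DRAG_DATA_KEY = "MyCompany.TaskListWindow.DraggedTask";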

Actually I don’t know the intended usage of DragAndDrop.SetGenericData, DragAndDrop.objectReferences, and DragAndDrop.paths, but I guess it is ok to use them for what you need. For instance, I currently store single objects I want to drag with a specific DRAG_DATA_KEY that should not interfere with other code. To detect if my drag operation finished, I set that key to null. Experiments showed that it is difficult to use the DragExited event to detect if a drag was aborted, because it is also fired when the mouse cursor leaves the window, as mentioned earlier. Thus, resetting your drag on the MouseDown event is easier.

I am totally aware that this is not a well-structured tutorial, but it’s late and I wanted to get this info out as fast as possible. I hope it’s useful.

Happy coding!

Simulation of large and dense crowds on the GPU using OpenCL

This post provides videos related to my master’s thesis Simulation of large and dense crowds on the GPU using OpenCL.

The document contains detailed information about the implementation. It is based on the Continuum Crowds paper. Similar to March of the Froblins, the simulation runs on the graphics card, but it uses OpenCL instead of shader programs. It also uses the original cost function of Continuum Crowds and expresses lane formation on the gradient field level.

It supports walls and tight environments. No velocity obstacle technique (like the one in Froblins) has been implemented, but adding one would be possible and would improve the movement. A basic binning algorithm is used for collision resolution.

The videos below show very nervous agent movement. This can be tweaked/improved with the cost weights, but you might lose other properties doing so. Unfortunately I did not have the time to experiment much with different weights. The document gives an explanation of how extreme weights visually influence the movement.
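For context, the unit cost field in Continuum Crowds (quoted from memory here, so double-check it against the paper and the thesis document) combines path length, time, and discomfort roughly as

    C = \frac{\alpha f + \beta + \gamma g}{f}

where f is the speed field, g is the discomfort field, and α, β, γ are the weights. Pushing any single weight to an extreme changes the visual character of the movement, which is what the document discusses.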

The source for the application and the thesis document is available at

https://github.com/hduregger/crowd
https://github.com/hduregger/crowd_document

The following videos contain several scenes, ranging from 4096 agents on a 256×256 grid up to more than 1 million (1048576) agents on a 1024×1024 grid.

Here is a short overview of the GUI for a scene with 4096 agents on a 256×256 grid.

  • 0:00 startup from the console
  • 0:10 simulation play, pause, single-stepping
  • 0:25 control panel (visualization, computation)
  • 0:32 view area, zooming, panning
  • 0:44 log window (scene information, system information)
  • 0:47 profiler window (detailed OpenCL kernel profiling)
  • 1:00 memory window (VRAM usage)
  • 1:09 counters for how many agents are active for each of the 4 agent groups, how many are parked, total agent count, field value under pointer, position, zoom

Visualization of the fields used for computing the navigation data (again the same scene with 4096 agents on a 256×256 grid).

  • 0:05 map overlay and discomfort field (infinite walls in white, low discomfort in red)
  • 0:15 density field (low density in red, high density in blue)
  • 0:22 average velocity direction. Color and arrows indicate direction.
  • 0:37 anisotropic speed field (values for each of the four directions north, east, south, west)
  • 1:02 anisotropic cost field
  • 1:17 potential field (zero potential at goal areas, potential increasing outwards over domain)
  • 1:43 gradient direction field (color coded, and later arrow overlay, nearest neighbor and bilinear filtering (as used by agents during movement update))
    The agents move against these gradients to their destinations.

Additional visualizations, including the tile update mechanism (again 4096 agents on a 256×256 grid).

  • 0:09 agent sprite rendering
  • 0:14 splat area visualization (areas that the agents contribute density and velocity to)
  • 0:24 tiles used during potential computation
  • 0:34 cells
  • 0:52 tile updates during potential computation
  • 1:53 outer iteration step influence on potential quality (step count has to be found empirically)

16384 agents on a 256×256 grid in a scene with varying discomfort. Agents leave the area through their group exits in the upper left corner. They respawn on the right and bottom edges. In the lower right corner is a zone with discomfort forming a hill. A band of large discomfort in the shape of a river separates the scene. Later, a discomfort spot (brush) is set. It can be moved with the mouse to interact with the agents. The agents immediately start to evade the area of high discomfort.

  • 0:12 discomfort field (white are infinite walls, river of high discomfort, hills with varying discomfort)
  • 0:30 placing the discomfort brush that adds additional discomfort into the scene and allows interacting with agents
  • 0:35 agents evade the area of additional discomfort

65536 agents on a 256×256 grid starting to form a circle after some time. I accelerated part of the video to 4x speed. Please excuse the stuttering/hiccups, which I could not prevent from occurring during video processing. The video also does not show the complete sequence: the desktop recording application always aborted recording halfway through, so I had to record the ending separately and cut the video together.

  • 0:06 the 4 agent groups each have their goal area in a diagonal. Upon reaching the goal, an agent switches to the next group and goal area.
  • 0:18 nice lane formations
  • 0:25 video accelerated to 4x speed (please excuse the hiccups originating from video encoding)
  • 1:25 after a few minutes a circle forms and you can clearly see the agents of the different groups
  • 1:40 detail view of agents changing group at goal area

More than 1 million (1048576) agents on a 1024×1024 grid in an otherwise empty scene. Agents head to the lower and left edges, with the simulation running at about 2.5 updates per second. A discomfort brush is added, which the agents evade. The CPU usage display shows that the computation runs primarily on the graphics card.

Corrections:

  • In Figure 21 the line leading into the Costs buffer should start at DiscomfortSum, not at DensitySum in the Mixed Buffer.
  • Page 56: “The reason is that each Stream Core can load 32 consecutive floats during a cycle, as mentioned in Section 3.2.2.” should be “The reason is that each Compute Unit can load 32 consecutive floats during a cycle, as mentioned in Section 3.2.2.”
  • Page 59: “IsConverged – Same as for Sleep, but with α = IsConverged in all
    state transitions.” is wrong, Figure 32 shows the correct state transitions.
  • Page 62: “To be more accurate, it would be necessary to also check the cost in the case on the right, because the cost also decides from which side the wavefront reached the central grid cell.” should be “To be more accurate, it would be necessary to also check the cost, because the cost also decides from which side the wavefront reached the central grid cell.”

Improvements:

  • I should have pointed out more clearly that March of the Froblins uses local navigation to get lane formation as an emergent phenomenon. In Continuum Crowds and this thesis the lane formation is based on the gradient field. Currently I don’t know all the implications of this difference.