Source Code Examples


This website offers several source code examples of Kinect-enabled applications in Java. The examples use the J4K Java library, which implements a Java binding for Microsoft's Kinect SDK. The classes in the J4K library communicate, through the Java Native Interface (JNI), with a native Windows library that handles the video, depth, and skeleton streams of the Kinect. More information about the J4K library can be found at this link.



The examples include the following applications:
  • KinectViewerApp A simple Kinect viewer that visualizes the live depth frames as a 3D surface and also shows the detected skeleton and video streams. It also offers many choices for changing the resolution and type of streams and for controlling other parameters of the sensors. More details...
       
  • ImageAvatarApp This is a simple Java application that uses the skeleton streams of a Kinect sensor and visualizes them as simple cardboard avatars composed of flat 2D images. It also offers choices for changing the appearance of the avatars and the 3D scene. More details...
       
  • XEDConvertApp A simple application that converts Kinect stream files recorded in XED format: it plays them back through this Kinect-enabled example program and records the depth frames to a raw binary file that your own programs can open later. You can also use this application to record a live sequence of depth frames. More details...
       
  • RAWViewerApp This example shows how to open raw depth files recorded using the XEDConvertApp. The raw depth frames are visualized as 3D surfaces. More details...
       
  • WRLConvertApp This application shows how to convert and save depth frames in VRML (.wrl) format so that they can be opened by other applications for 3D modeling and animation. More details...
       
  • MultipleKinectApp In this example, the video, depth, and skeleton streams from two Kinect sensors are opened and shown in a graphical layout similar to the one used in KinectViewerApp. This application can be easily extended to handle more Kinect sensors, assuming that your computer has sufficient USB bandwidth to support them. More details...
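Several of the applications above render depth frames as 3D surfaces. As a rough illustration of what that involves, the sketch below back-projects a single depth sample to a 3D point using a pinhole camera model. The class name, focal length, and principal point are assumptions made here for illustration (typical values for a 640x480 Kinect v1 depth camera), not constants taken from the J4K library.

```java
/* Hypothetical sketch: back-projecting one raw depth sample (in mm)
   to a 3D point in camera space with a pinhole camera model.
   FX, FY, CX, CY are assumed values, not J4K constants. */
public class DepthToPoint {
    static final double FX = 585.0, FY = 585.0; // assumed focal lengths (pixels)
    static final double CX = 320.0, CY = 240.0; // assumed principal point

    // Convert a depth pixel (column u, row v, depth in mm) to meters.
    public static double[] toPoint(int u, int v, int depthMm) {
        double z = depthMm / 1000.0;      // millimeters to meters
        double x = (u - CX) * z / FX;
        double y = (v - CY) * z / FY;
        return new double[] { x, y, z };
    }

    public static void main(String[] args) {
        // The center pixel at 2000 mm maps to (0, 0, 2) meters.
        double[] p = toPoint(320, 240, 2000);
        System.out.println(p[0] + " " + p[1] + " " + p[2]);
    }
}
```

Applying this formula to every pixel of a depth frame produces the point cloud that a viewer can triangulate into a 3D surface.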
       

  • How to write your own Kinect-Java programs


    This tutorial will show you how to develop your own computer vision applications using Kinect in Java.


    The most important piece of code in all the provided examples is the definition of a Java class that handles the live streams received from a Kinect sensor. You can easily define such a class in less than 10 lines of Java code!

    Here is how:


    import edu.ufl.digitalworlds.j4k.J4KSDK;
    import edu.ufl.digitalworlds.j4k.DepthMap;
    import edu.ufl.digitalworlds.j4k.Skeleton;
    import edu.ufl.digitalworlds.j4k.VideoFrame;

    /*This class is an implementation of the abstract class J4KSDK.
      It is a simple example of source code that shows how to read 
      depth, video, and skeleton frames from a Kinect sensor.*/

    public class Kinect extends J4KSDK{

    First of all, your class should extend the J4KSDK class from the J4K library, as shown above. Optionally, you can define your own constructor and custom parameters for your class, as shown in the example below.


        /*This object will hold the current video frame received from 
          the Kinect video camera.*/
     
        VideoFrame videoTexture;

        /*The constructor of the class initializes the native Kinect
          SDK libraries and creates a new VideoFrame object.*/

        public Kinect() {
            super();
            videoTexture=new VideoFrame();
        }

    Finally, you have to implement the three abstract methods that will be called automatically every time a new depth, video, or skeleton frame is received from the Kinect sensor. You can customize the content of these three methods according to the needs of your application. An example of such an implementation is shown below.


        /*The following method will run every time a new depth frame is
          received from the Kinect sensor. The packed data frame is
          converted into a DepthMap object, with U,V texture mapping if
          available.*/

        @Override
        public void onDepthFrameEvent(short[] packed_depth, int[] U, int[] V) {
            
            DepthMap map=new DepthMap(depthWidth(),depthHeight(),packed_depth);
            if(U!=null && V!=null) map.setUV(U,V,videoWidth(),videoHeight());
        }

        /*The following method will run every time a new skeleton frame
          is received from the Kinect sensor. The skeletons are converted
          into Skeleton objects.*/
     
        @Override
        public void onSkeletonFrameEvent(float[] data, boolean[] flags) {

            Skeleton[] skeletons=new Skeleton[J4KSDK.NUI_SKELETON_COUNT];
            for(int i=0;i<J4KSDK.NUI_SKELETON_COUNT;i++)
              skeletons[i]=Skeleton.getSkeleton(i, data, flags);
        }

        /*The following method will run every time a new video frame
          is received from the Kinect video camera. The incoming frame
          is passed to the videoTexture object to update its content.*/
        
        @Override
        public void onVideoFrameEvent(byte[] data) {    

            videoTexture.update(videoWidth(), videoHeight(), data);
        }
    }
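    The onDepthFrameEvent method above receives the depth data as packed 16-bit values. As background, here is a sketch of the bit layout commonly used by the Kinect v1 SDK: the depth in millimeters in the upper 13 bits and a 3-bit player index in the lower bits. The PackedDepth class and this exact layout are illustrative assumptions, not part of the J4K API; in practice the DepthMap class does the unpacking for you.

```java
/* Illustrative sketch (an assumption, not J4K source code): unpacking
   a Kinect v1 packed depth value -- depth in mm in the upper 13 bits,
   player index in the lower 3 bits of each 16-bit sample. */
public class PackedDepth {

    // Extract the depth in millimeters from a packed 16-bit value.
    public static int depthMillimeters(short packed) {
        return (packed & 0xFFFF) >>> 3;
    }

    // Extract the 3-bit player index (0 = no player) from a packed value.
    public static int playerIndex(short packed) {
        return packed & 0x7;
    }

    public static void main(String[] args) {
        short packed = (short) ((1500 << 3) | 1); // 1500 mm, player 1
        System.out.println(depthMillimeters(packed) + " mm, player " + playerIndex(packed));
        // prints: 1500 mm, player 1
    }
}
```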

    That's it! A simpler implementation can be written in less than 10 lines. As easy as A-B-C!
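    As background on the raw bytes handled in onVideoFrameEvent above: a common layout for Kinect v1 color frames (an assumption here, not taken from the J4K sources) is 4 bytes per pixel in BGRA order. The sketch below reads one such pixel and packs it into the 0xAARRGGBB int format used by java.awt imaging; the BgraPixel class and its helper are hypothetical names introduced for illustration.

```java
/* Illustrative sketch: reading one pixel from a BGRA byte buffer
   (4 bytes per pixel, an assumed layout) and packing it into a
   standard 0xAARRGGBB int. */
public class BgraPixel {

    // Read pixel (x, y) from a width-pixels-wide BGRA byte buffer.
    public static int argbAt(byte[] bgra, int width, int x, int y) {
        int i = (y * width + x) * 4;        // 4 bytes per pixel
        int b = bgra[i]     & 0xFF;
        int g = bgra[i + 1] & 0xFF;
        int r = bgra[i + 2] & 0xFF;
        int a = bgra[i + 3] & 0xFF;
        return (a << 24) | (r << 16) | (g << 8) | b;
    }

    public static void main(String[] args) {
        // A 1x1 frame: blue=0x10, green=0x20, red=0x30, alpha=0xFF.
        byte[] frame = { 0x10, 0x20, 0x30, (byte) 0xFF };
        System.out.printf("0x%08X%n", argbAt(frame, 1, 0, 0)); // prints 0xFF302010
    }
}
```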


    Disclaimer: The names JAVA and KINECT and their associated logos are trademarks of their respective owners, Oracle and Microsoft. Neither of these companies endorses, funds, or is in any way associated with the J4K library.

    Disclaimer: This software is provided for free, without any warranty expressed or implied, for academic, research, and strictly non-commercial purposes only. By downloading this library you accept the Terms and Conditions.

    University of Florida, Digital Worlds Institute, P.O.Box 115810, 101 Norman Gym, Gainesville, FL 32611-5810, USA