Frequently Asked Questions

Please browse the following list of frequently asked questions before contacting us.

  • How should I cite the J4K library in my paper?
    You can cite the following paper that first presented the use of this library:
    A. Barmpoutis. "Tensor Body: Real-time Reconstruction of the Human Body and Avatar Synthesis from RGB-D", IEEE Transactions on Cybernetics, Special issue on Computer Vision for RGB-D Sensors: Kinect and Its Applications, October 2013, Vol. 43(5), pp. 1347-1356.

  • How can I download the J4K library?
    The J4K library for Kinect is included in the UFDW Java Library that can be downloaded from the download page (click here).

  • In just a few words how does J4K work exactly, how does it process the information from Kinect, and how can we access this information?
    J4K uses the Microsoft Kinect SDK through a DLL library that was specially made to communicate with Java. This DLL opens the various streams of the Kinect sensor and creates an independent thread that controls the events of the Kinect streams. Whenever a Kinect event occurs, the thread calls a Java method such as onDepthFrameEvent, onSkeletonFrameEvent, onColorFrameEvent, etc. The data related to the event are passed as arguments to these callback methods. Java programmers who use J4K just need to create a class that extends the J4KSDK class and implement these "on...Event" methods, i.e. code what should happen when a new skeleton frame is received, a new depth frame is received, etc. You can see several simple examples at:
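    The event-driven flow described above can be sketched in plain Java without a Kinect attached. The "on...Event" method names below mirror the ones mentioned in the answer, but the parameter lists and the KinectEvents base class are illustrative assumptions, not the library's actual API:

```java
// A minimal, self-contained sketch of the event-driven pattern described
// above. The on...Event names come from the answer; the parameter lists
// and this base class are illustrative assumptions, not the real J4KSDK.
abstract class KinectEvents {
    public abstract void onDepthFrameEvent(short[] depthFrame);
    public abstract void onSkeletonFrameEvent(float[] jointPositions);

    // Stand-in for the native DLL thread: feeds one synthetic frame of each kind.
    public void simulateOneFrameEach() {
        onDepthFrameEvent(new short[640 * 480]);  // one 640x480 depth frame
        onSkeletonFrameEvent(new float[20 * 3]);  // 20 joints, x/y/z each
    }
}

public class MyKinect extends KinectEvents {
    int depthValues, joints;

    @Override public void onDepthFrameEvent(short[] depthFrame) {
        depthValues = depthFrame.length;          // react to the new depth frame
    }
    @Override public void onSkeletonFrameEvent(float[] jointPositions) {
        joints = jointPositions.length / 3;       // react to the new skeleton frame
    }

    public static void main(String[] args) {
        MyKinect k = new MyKinect();
        k.simulateOneFrameEach();
        System.out.println(k.depthValues + " depth values, " + k.joints + " joints");
    }
}
```

    With the real library, the subclass would instead be constructed and started so that the native thread delivers live frames.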

  • Is there any documentation available?
    Yes. The API of the 4 core classes in the J4K library for Kinect is presented in detail on the web pages that can be found in the menu on the left of this page, and is also listed here: J4KSDK.class API, DepthMap.class API, Skeleton.class API, VideoFrame.class API.

  • How can I use the short[] depth_frame data provided in the onDepthFrameEvent method?
    The values in the depth_frame array are not consistent across different versions of Kinect. For example, in the original Kinect this array contains the player IDs and the depth information packed together as 13+3 bits. Also, the data are stored as unsigned shorts, so you have to take care of the sign if you want to use this array directly. However, all of this is taken care of if you use the J4K API, so you don't have to deal with these details. J4K unpacks the data for you and gives you the depth in the XYZ array (you must initialize the J4KSDK.XYZ stream along with your other streams when you start() the Kinect), and similarly the player IDs are given in the player_index array. The units of the XYZ array are meters, and the coordinates of the first point are x=XYZ[0]; y=XYZ[1]; z=XYZ[2]; the coordinates of the second point are x=XYZ[3]; y=XYZ[4]; z=XYZ[5]; etc. You can use the XYZ array as a depth map using the DepthMap.class API, and you can find more stream flags in the J4KSDK.class API.
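    For illustration, the 13+3-bit packing and the XYZ indexing described above can be sketched in plain Java. The bit layout shown (depth in the upper 13 bits, player ID in the low 3 bits) is an assumption about the old-format raw frames; with the J4K API you would simply read the XYZ and player_index arrays instead:

```java
public class DepthUnpack {
    // Assumed old-Kinect packing: low 3 bits = player index, upper 13 bits
    // = depth. The & 0xFFFF mask treats the short as unsigned, which is
    // the sign issue mentioned in the answer above.
    static int depthValue(short packed) {
        int raw = packed & 0xFFFF; // reinterpret as unsigned 16-bit
        return raw >>> 3;          // upper 13 bits carry the depth
    }
    static int playerIndex(short packed) {
        return packed & 0x7;       // low 3 bits carry the player ID
    }

    // XYZ indexing: point i occupies slots 3*i, 3*i+1, 3*i+2 (in meters).
    static float[] pointAt(float[] xyz, int i) {
        return new float[]{ xyz[3 * i], xyz[3 * i + 1], xyz[3 * i + 2] };
    }

    public static void main(String[] args) {
        short packed = (short) ((1200 << 3) | 2); // depth 1200, player 2
        System.out.println(depthValue(packed));   // 1200
        System.out.println(playerIndex(packed));  // 2

        float[] XYZ = { 0f, 0f, 1.5f, 0.1f, 0.2f, 2.0f };
        System.out.println(pointAt(XYZ, 1)[2]);   // 2.0 (z of second point)
    }
}
```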

  • How can I map the video frames to the depth frames?
    The mapping information is included in the UV array provided by the onDepthFrameEvent method (you must initialize the J4KSDK.UV stream along with your other streams when you start() the Kinect). If you take from the XYZ array the X,Y,Z coordinates of a particular data point, say X[point], Y[point], Z[point], this point corresponds to the video pixel location U[point],V[point] from the UV array. The video pixel locations are given in the interval 0-1, where the left-most pixel is 0 and the right-most is 1, and similarly for the vertical axis. So the bottom line is that you don't need to move any point in 3D space: X,Y,Z is already in the correct position, and its color is at the video pixel location U,V. You can use the XYZ and UV arrays together to create a colored depth map using the DepthMap.class API.
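    To look up the color of a point yourself, the normalized U,V values have to be scaled to integer pixel positions in the video frame. A small self-contained sketch (the rounding convention and the 640x480 frame size are assumptions; DepthMap.class handles this for you when building colored depth maps):

```java
public class UVMapping {
    // Convert normalized texture coordinates (0..1, as described above)
    // into integer pixel coordinates for a video frame of the given size.
    // Rounding to the nearest pixel is an assumed convention.
    static int[] uvToPixel(float u, float v, int width, int height) {
        int col = Math.round(u * (width - 1));  // 0 -> left-most, 1 -> right-most
        int row = Math.round(v * (height - 1));
        return new int[]{ col, row };
    }

    public static void main(String[] args) {
        // Point i uses UV[2*i] and UV[2*i+1], matching XYZ[3*i..3*i+2].
        float[] UV = { 0f, 0f, 0.5f, 0.5f, 1f, 1f };
        int i = 1; // second point
        int[] px = uvToPixel(UV[2 * i], UV[2 * i + 1], 640, 480);
        System.out.println(px[0] + "," + px[1]); // 320,240
    }
}
```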

  • How can I convert my XED files using XEDConvertApp?
    The purpose of the source code example XEDConvertApp is to show you how to use the Java library for Kinect (J4K) so that you can open an XED file and save it in your own custom format.
       Before you do anything else it is essential that you read: and then follow the steps to install the source code on your computer:
       If you just want to use the Java binary compiled from the XEDConvertApp source code (not recommended, because it is just an example and therefore saves the data in our custom format rather than yours, which can easily be changed in the source code), you can do the following:
    1) Connect the Kinect sensor to your computer and to a power outlet
    2) Open the XEDConvertApp application
    3) Open one of your XED files using the Microsoft Kinect Studio
    4) Connect Kinect Studio with XEDConvertApp by clicking on the thunder icon in Microsoft Kinect Studio.
    5) Play the XED file from Microsoft Kinect Studio
    6) Record the output of the XED file in real time from XEDConvertApp by specifying the output file name

  • Where is the source code?
    The J4K library for Kinect is included in the UFDW Open-Source Java Library. The source code can be downloaded with git from the UFDW Git repository as an Eclipse project. If you need help, follow the instructions on the download page (see the section about the Open Source project).

  • I have another question, who should I contact?
    You can contact Prof. Angelos Barmpoutis. His contact details can be found here.

  • Disclaimer: The names JAVA and KINECT and their associated logos are trademarks of their respective copyright owners Oracle and Microsoft. None of these companies endorse, fund, or are in any way associated with the J4K library.

    Disclaimer: This software is provided for free without any warranty expressed or implied for academic, research, and strictly non commercial purposes only. By downloading this library you accept the Terms and Conditions.

    University of Florida, Digital Worlds Institute, P.O.Box 115810, 101 Norman Gym, Gainesville, FL 32611-5810, USA