I had been thinking of tweaking the OpenNI samples to work with modified data, for example the result of post-processing the depth/RGB images in Matlab or OpenCV. Usually we record data from OpenNI in the form of an '.oni' file and then want to process it in Matlab or OpenCV. For this we need to convert the frames to OpenCV images and save them to disk. Later we load these images in Matlab or OpenCV, do the processing, and resave them as an '.oni' file so the OpenNI samples can be run on the modified data.
For the first part, you can either use OpenNI, converting the frames to OpenCV images and then writing them to disk. The conversion to OpenCV images can be done in the following way:
cv::Mat colorArr[3];
cv::Mat colorImage;
const XnRGB24Pixel* pImageRow;
const XnRGB24Pixel* pPixel;

imageGen.SetPixelFormat(XN_PIXEL_FORMAT_RGB24); // xn::ImageGenerator imageGen;
imageGen.GetMetaData(imageMD);                  // xn::ImageMetaData imageMD;

pImageRow = imageMD.RGB24Data();

colorArr[0] = cv::Mat(imageMD.YRes(), imageMD.XRes(), CV_8U);
colorArr[1] = cv::Mat(imageMD.YRes(), imageMD.XRes(), CV_8U);
colorArr[2] = cv::Mat(imageMD.YRes(), imageMD.XRes(), CV_8U);

for (int y = 0; y < imageMD.YRes(); y++)
{
    pPixel = pImageRow;
    uchar* Bptr = colorArr[0].ptr<uchar>(y);
    uchar* Gptr = colorArr[1].ptr<uchar>(y);
    uchar* Rptr = colorArr[2].ptr<uchar>(y);
    for (int x = 0; x < imageMD.XRes(); ++x, ++pPixel)
    {
        Bptr[x] = pPixel->nBlue;
        Gptr[x] = pPixel->nGreen;
        Rptr[x] = pPixel->nRed;
    }
    pImageRow += imageMD.XRes();
}
cv::merge(colorArr, 3, colorImage);
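Once colorImage holds the merged frame, writing it to disk is a one-liner with cv::imwrite. Below is a minimal sketch; the frame counter and the "rgb_%04d.png" pattern are just illustrative assumptions. Note that colorArr was filled in B, G, R order, so the merged image is already in OpenCV's expected BGR layout:

// Minimal sketch: save the merged frame under a sequential name.
// 'frameIdx' and the filename pattern are illustrative assumptions.
static int frameIdx = 0;
char outName[100];
sprintf(outName, "rgb_%04d.png", frameIdx++);
cv::imwrite(outName, colorImage); // writes an 8-bit, 3-channel PNG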
The second way is to directly use the Matlab Toolbox for Kinect and save the images from there.
For the second part, i.e. how to compile these images into an '.oni' file, I used the OpenNI sample program NiRecordSynthetic. I provide a dummy .oni file as input, change its data according to my new depth maps, and record a new .oni file from it. This is what I do in the transformMD() function of this program:
***************************************************************
char filename[100];
static int count = 1;
sprintf(filename, "filename%d.png", count++); // here I assume the images are numbered in a sequence, e.g. filename1.png, filename2.png, ...
IplImage* img = 0;
img = cvLoadImage(filename, CV_LOAD_IMAGE_ANYDEPTH | CV_LOAD_IMAGE_ANYCOLOR); // reading 16-bit depth maps
if (!img) printf("Could not load image file: %s\n", filename);
DepthMap& depthMap = depthMD.WritableDepthMap();
for (XnUInt32 y = 0; y < depthMap.YRes(); y++)
{
for (XnUInt32 x = 0; x < depthMap.XRes(); x++)
{
CvScalar s;
s=cvGet2D(img,y,x); // getting the value in s
depthMap(x,y) = s.val[0]; // setting the value to the modified depth pixel
}
}
cvReleaseImage(&img);
*****************************************************************************
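For context, roughly how this snippet sits inside the sample: the function name transformMD() follows the description above, but the exact signature, and whether MakeDataWritable() is called here or by the caller, may differ between OpenNI 1.x versions, so treat this as a sketch rather than the sample's verbatim code.

void transformMD(xn::DepthMetaData& depthMD)
{
    // make the frame's depth buffer writable before modifying it
    depthMD.MakeDataWritable();

    // ... load the matching 16-bit PNG and copy it into
    // depthMD.WritableDepthMap(), exactly as in the snippet above ...
}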
In order to convert .oni images to OpenCV images, do I need to have OpenNI, or just OpenCV itself? I get an error at XnRGB24Pixel when running it in Visual Studio. Can you help me with this?
@Sue Yes, the way it is defined here, it assumes the stream comes from an OpenNI program and shows how to convert an image in the OpenNI format to OpenCV at runtime. XnRGB24Pixel is the data type for RGB pixels defined in OpenNI, so if you are not using OpenNI it will give an error.
To avoid OpenNI you can use only OpenCV as well. The later versions of OpenCV provide the option of streaming depth/RGB images from the Kinect. You can try that (see the sketch after this reply).
Or you can use the Matlab SDK for Kinect as well.
Hope it helps.
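For illustration, a minimal sketch of the OpenCV-only route, assuming OpenCV 2.x built with OpenNI support (the constants come from highgui):

#include <opencv2/highgui/highgui.hpp>

// Minimal sketch: grab Kinect frames through OpenCV's OpenNI backend.
cv::VideoCapture capture(CV_CAP_OPENNI);
cv::Mat depthMap, bgrImage;
while (capture.grab())
{
    capture.retrieve(depthMap, CV_CAP_OPENNI_DEPTH_MAP); // CV_16UC1, depth in mm
    capture.retrieve(bgrImage, CV_CAP_OPENNI_BGR_IMAGE); // CV_8UC3
    // ... process or save the frames ...
}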
Hey, I am working with OpenNI for the first time and need some help. I have this .oni file of depth images and I need to extract frames from it. How can I do this?
Hi Shivani,
You can have a look at the NiSimpleRead sample of OpenNI to read the .oni files. Once you have a frame in the original format, you can convert it to OpenCV format and then save the frames either as a video or to disk as single images. For converting the frames to OpenCV format you can use the same approach as mentioned in this post.
#Reference NiSimpleRead.cpp for help
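Roughly, the reading loop could look like this (untested sketch for OpenNI 1.x; the file path is just a placeholder and error checks are omitted for brevity):

// Play back a recorded .oni file and read its depth frames.
xn::Context context;
context.Init();

xn::Player player;
context.OpenFileRecording("recording.oni", player); // placeholder path

xn::DepthGenerator depthGen;
context.FindExistingNode(XN_NODE_TYPE_DEPTH, depthGen);

xn::DepthMetaData depthMD;
while (context.WaitOneUpdateAll(depthGen) == XN_STATUS_OK)
{
    depthGen.GetMetaData(depthMD);
    // depthMD.Data() now points to one 16-bit depth frame;
    // convert/save it with OpenCV as described in the post.
}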
Could you post a link to the project please?
@haggag87 A link to which project?
If you are talking about the project for this conversion, I haven't really put it online anywhere; I might do so in the future. This project would mainly contain the changes I have described in these excerpts :).
Hi Tayyab,
Is there any way to convert RGB and depth images to .oni? Actually, I want to convert the RGB and depth images to an XYZRGBA point cloud. Do you have any suggestions?
Thanks
Hi Rahim,
Have a look at the pcl::ONIGrabber class of PCL.
It has a member function "convertToXYZRGBAPointCloud" that takes in a depth and an RGB image.
Hope it helps.
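As a rough illustration of that route, the usual PCL grabber callback pattern could look like this (untested sketch; the file path is a placeholder and the exact signal signature may vary between PCL 1.x versions):

#include <boost/function.hpp>
#include <pcl/io/oni_grabber.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

// Callback receiving each XYZRGBA cloud built from the .oni frames.
void cloudCallback(const pcl::PointCloud<pcl::PointXYZRGBA>::ConstPtr& cloud)
{
    // ... process or save the point cloud ...
}

int main()
{
    pcl::ONIGrabber grabber("recording.oni", false, true); // repeat=false, stream=true
    boost::function<void (const pcl::PointCloud<pcl::PointXYZRGBA>::ConstPtr&)> f = &cloudCallback;
    grabber.registerCallback(f);
    grabber.start();
    while (grabber.isRunning())
        ;
    return 0;
}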
Hi Tayyab,
I am a new user of OpenNI. I want to create a sequence of depth images, specify the depth of their pixels, and turn them into a depth stream. In other words, I want to simulate a depth image. Is there any way to do this?
Thanks in advance!
Hi Hai,
I guess it shouldn't be much of a problem. Please have a look at one of my other posts, where a mock depth generator is used to run other utilities on modified depth data; in your case you would just fill in the custom depth values and then save them, with ROS or otherwise. Basically, one should be able to simulate the depth stream using these tools. Hope it helps.
http://tayyabnaseer.blogspot.de/2012/04/running-ros-openni-tracker-with.html
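For a flavour of that approach, a rough OpenNI 1.x sketch of feeding custom values through a mock depth node (names, resolution, and exact overloads are assumptions; see the linked post for the full setup):

xn::MockDepthGenerator mockDepth;
mockDepth.Create(context); // 'context' is an initialized xn::Context

xn::DepthMetaData depthMD;
depthMD.AllocateData(640, 480); // allocate a writable 640x480 frame
xn::DepthMap& depthMap = depthMD.WritableDepthMap();

for (XnUInt32 y = 0; y < depthMap.YRes(); y++)
    for (XnUInt32 x = 0; x < depthMap.XRes(); x++)
        depthMap(x, y) = 1000; // e.g. a flat plane at 1 metre

mockDepth.SetData(depthMD); // the mock node now serves this synthetic frame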
Hi Tayyab,
First of all, thanks for the above code, it was very useful!
Now, I have a small issue converting image sequences to .oni files:
I am opening a dummy .oni file and I can create an output file with my images, but only if the input sequence has fewer frames than the dummy file!
Using a huge dummy file is not a good solution, so I was wondering if you have any idea how to adjust the size of the file, add extra frames at the end, or do it any other way.
Thank you in advance!
Do you know if there's a way to do the same (modify/create .oni files) in OpenNI 2? It looks like the NiRecordSynthetic sample is no longer available in 2.0.
Hi Tayyab,
I am totally new to OpenNI and have little computing background, so I sincerely hope you can assist me with this.
Basically, I have been trying to capture images using NiViewer under OpenNI. However, I do not know how to save those images.
I understand that you have been trying to explain the process, but I don't really understand it. Could I trouble you to provide a step-by-step explanation?
Thanks
Kelvin
I am programming in C++ and have been able to save a .oni depth file for testing purposes using OpenNI2.
I am now, however, unable to load this .oni file back into a program. The code I use is as follows:
Status rc_ = openni::STATUS_OK;
Device device_;
rc_ = OpenNI::initialize();
char* fileaddress = "C:\\Users\\James\\DepthRecording1.oni";
rc_ = device_.open(fileaddress);
After the .open line has been executed, rc_ has the following status: STATUS_NO_DEVICE.
This is, however, the method of loading files specified in the OpenNI2 documentation, and the code is proven to work if the file address is switched for openni::ANY_DEVICE, in which case it loads my 3D camera (Orbbec Astra).
Does anyone know what needs to be changed/ added in order to be able to load the file?
How can I convert .xed files into .bin or .avi files?