Abstract—Customers in many shops and organisations face long waits for trial rooms when trying on products. The proposed method processes garments in real time using real-time technologies: the first step is to create a human clone using real-time simulation, after which the subject's height and skin tone are used to give the skeleton a realistic look. GUI (Graphical User Interface) software is used in combination with hardware sensors such as cameras, light, and motion sensors. Using the Unity SDK and the latest Kinect depth camera, together with an operating system that interfaces with user-friendly applications, the controllers can be handled effortlessly. Both consumers and merchants should appreciate the user interface that results from this strategy, and this new, more user-friendly approach should lead to more marketing activity. As well as offering a proper dressing solution, the suggested system should solve challenges that both the shop and the end user face. Most people picture a traditional brick-and-mortar store when they think of shopping; when there are not enough trial rooms available, customers can instead try on items in real time. Using a Virtual Mirror to try on garments and browse for new ones has enormous potential: thanks to the "Virtual Reality" concept, customers may virtually try on a broad variety of things. The Virtual Trial Room concept has been investigated by several research groups, all of which have come up with different solutions. One option is to perform customer fittings on a humanoid mannequin, using humanoid models that mirror the user's actions. A second option is to display a static picture of the apparel on screen and ask the user to alter their position in order to match where the garment is. The proposed "Virtual Mirror" detects the wearer and superimposes the apparel in a virtual trial room in real time. The system first gathers data using the Kinect Sensor; Unity then uses the skeletal data of the user's character to generate an outfit, which is applied to the wearer in real time and overlaid on the video shown on the screen. This method has the advantage of requiring less time and effort to put on clothing. Customers may then buy the clothing by scanning the QR code that appears on the display screen. The project also assists in market management by reducing the need for clients to try on each piece of clothing; merchants thus save time and space by not having to keep as much product on hand. Gesture recognition for garment selection could be used instead of a separate device to enhance this idea.
Keywords—Augmented reality, 3D Depth Cameras, Kinect Sensor, OpenCV, Depth Image, Skeleton, Unity SDK
I. INTRODUCTION
E-commerce, defined as the sale of products and services over the Internet, generates a lot of data. The total value of online goods and services increased by around USD 1,336 billion between 2014 and 2018 [1]. Another indicator of e-commerce growth is the increasing share of online sales in total sales (online and offline): online sales accounted for 10.2% of total global sales in 2017, up from the previous year (Bakopoulos, 2019). Several factors have contributed to this, but one of the most significant is the propensity of Internet users to purchase clothes online. Statistics show that clothing is the most popular online category worldwide and brings in a lot of revenue [2]. Clothes are an excellent fit for online shopping because of its many advantages: buyers can rapidly compare offers from multiple suppliers, sellers can swiftly adjust their offer in reaction to changing fashion trends (including discounts), and making a purchase is as simple as possible across a wide range of products. Sellers, however, face a unique set of challenges, since buyers want to see how a product suits their specific body type or skin tone. As a result, this industry has a higher return rate than others in e-commerce: returns may account for as much as 60% of overall sales, posing a significant problem for Internet-based enterprises. One solution is a virtual fitting room (hereinafter VFR). With a VFR, anyone may virtually try on clothing before purchasing it, comparing size, fit, style, and colour, which gives the e-customer a sense of how an item will look with other items before buying it. Customers benefit from a VFR because it acts as a "virtual mirror" that helps them make better purchasing decisions. Generation Y's inclination to use VFRs in buying decisions is the subject of this study. VFR is having a hard time gaining traction since it is still a relatively new option for online retailers: many Internet users, including those from Generation Y who are adept at using information and communication technologies, have never heard of VFR before. Preliminary research is therefore necessary to assess whether or not a VFR may be employed. This study gives an in-depth investigation into attitudes towards, and readiness to use, this kind of application. Two kinds of VFR were investigated: clothes imitation on a two-dimensional likeness/photograph of a client (the so-called 2D overlay) and clothes imitation on a three-dimensional form based on the customer's measurements (the so-called 3D mannequin).
When it comes to Internet buying, the benefits are balanced by drawbacks [10][12]: there is no face-to-face contact with the salesperson, and the goods cannot be seen or touched, which raises data security and payment security problems as well as privacy concerns.
Online consumers are nevertheless overwhelmingly convinced that the advantages of shopping online outweigh the disadvantages. As a result, e-commerce revenue continues to rise year after year.
II. CURRENT APPROACH
OpenCV is a well-known computer vision library. To follow the image-processing steps used in this system, it is necessary to understand the foundations of OpenCV.
1. Reading an image
# Importing the OpenCV library
import cv2

# Reading the image using the imread() function
image = cv2.imread('image.png')

# Extracting the height and width of the image
h, w = image.shape[:2]

# Displaying the height and width
print("Height = {}, Width = {}".format(h, w))
The problem with a plain resize is that the aspect ratio of the image is not maintained, so some extra work is needed in order to maintain a proper aspect ratio.

# Calculating the ratio
ratio = 800 / w

# Creating a tuple containing the new width and height
dim = (800, int(h * ratio))

# Resizing the image
resize_aspect = cv2.resize(image, dim)
The getRotationMatrix2D() function returns a 2x3 rotation matrix M whose entries are derived from alpha and beta, where

alpha = scale * cos(angle)
beta = scale * sin(angle)

warpAffine()
The warpAffine() function transforms the source image using the rotation matrix:

dst(x, y) = src(M11*x + M12*y + M13, M21*x + M22*y + M23)

Here M is the rotation matrix described above; the function computes the new x, y coordinates of each pixel and transforms the image accordingly.
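A minimal sketch of this rotation step in code, reusing the image, h, and w variables from the snippets above; the 45-degree angle and the 1.0 scale are illustrative values, not taken from the text:

# Computing the 2x3 rotation matrix M around the image centre
center = (w // 2, h // 2)
M = cv2.getRotationMatrix2D(center, 45, 1.0)

# warpAffine() applies M to every pixel, producing the rotated image
rotated = cv2.warpAffine(image, M, (w, h))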
6. Drawing a Rectangle
We first copy the original image, because rectangle() is an in-place operation.
output = image.copy()
Using the rectangle() function to draw a rectangle (top-left corner first, then bottom-right):
rectangle = cv2.rectangle(output, (600, 400), (1500, 900), (255, 0, 0), 2)
It takes in 5 arguments –
Image
Top-left corner co-ordinates
Bottom-right corner co-ordinates
Color (in BGR format)
Line width
7. Displaying text
# Copying the original image
output = image.copy()
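The snippet above ends after copying the image. A minimal, hedged completion of this step uses the standard cv2.putText() function; the text string, position, font, and colour below are illustrative:

# Drawing the text onto the copied image; the arguments are the image, the
# text, the bottom-left corner of the text, the font, the font scale, the
# colour (in BGR), and the line thickness.
cv2.putText(output, 'OpenCV Demo Text', (500, 550),
            cv2.FONT_HERSHEY_SIMPLEX, 4, (255, 0, 0), 2)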
III. DESIGN
An online dressing room (also known as a virtual fitting room or virtual changing room) serves a similar purpose to an in-store changing room: customers can check one or more of size, fit, and style online rather than physically. Fit technologies have been reported on since 2010 [32], but they are now accessible from a wider range of providers [33]. A rising number of well-known merchants have also begun to employ these technologies in their webstores. Fit technology can be classified by the issue it addresses (such as size, fit, or style) or by the technical approach, and there are several technical approaches to choose from.
Body scanners
One form of body scanner uses webcams, phone cameras, or Microsoft's Kinect device, while the other uses more advanced technology and requires the customer to travel to the scanner. When web or phone camera technology is used, the customer must stand at a certain distance from the camera and hold something that can serve as a size reference (such as a CD). Due to their complexity and expense, the most modern scanners, such as those using laser or millimetre-wave detector technology or large arrays of Kinect sensors, are not found in most shops; instead, they are located in shopping malls and big department stores. A customer must visit such a location to be scanned, and the data they supply is then used on websites. Body scanner fit technology has been available since the advent of Levi's Intellifit system in 2005 [35]. Unique Solutions acquired Intellifit in 2009 [36] and renamed the company Me-ality in October of that year [37]. Weekday's body scan jeans, for example, use such scans to create customised apparel [38].
3D solutions
3D fitting rooms use computer-generated 3D visuals to create a virtual-world-like experience. Based on consumer body measurements and shape data, these technologies create a virtual mannequin (avatar). To build the avatar, customers must enter their own measurements. The avatar can be customised in terms of race, skin tone, hairstyle, and even a picture of the customer's own face. Used in this way, the avatar lets customers see how they will look wearing the apparel, accessories, and other products on offer. Some of the more advanced versions let customers compare a variety of clothing styles side by side and try on several items at the same time.
3D customer’s model
With these technologies, customers may create a 3D model of themselves using data gleaned from scans or other sources, such as
photos or videos.
Dress-up mannequins/mix-and-match
This variant uses actual mannequins to photograph clothing and accessories; the mannequins are then digitally removed to leave a virtual representation of the garments in their stead. The customer may then drag and drop (as well as mix and match) the virtual items. When a customer inputs their measurements into the system, the database retrieves and displays the right group of images: those in which the mannequin has the same proportions as the shopper.
Augmented reality
In many augmented reality (AR) systems, a photograph of a garment or accessory is overlaid on a live video feed from the customer's camera. An overlay 3D model or image that moves with the customer makes the virtual garment or accessory appear to be worn by the consumer in the video. A webcam, smartphone camera, or 3D camera such as the Kinect is usually required for AR virtual dressing rooms to work; Zugara's Webcam Social Shopper is an easy way to see how this works. Another example of augmented reality in virtual dressing rooms is the use of a 3D camera to alter portions of a garment or accessory inside a display. [284]
Real models
The first edition of this approach is currently readily available through a variety of online stores. The description of the item includes its dimensions and the characteristics of the person wearing it. Some firms even go so far as to provide images of the apparel on a variety of models, each wearing a different size. A video of each model may be shown to customers, who can play it or move it around on the screen to get a better sense of how the outfit would look on them.
Computer: responsible for
executing algorithms for skeleton tracking,
controlling the movement of cloth colliders,
combining the video stream and skeleton data (same viewpoint),
computation for cloth physics simulations,
etc.
Figures No. 23 and 24 depict the setup in detail. The Kinect camera captures the scene as well as the user moving in front of it. After a few seconds in the calibration posture, tracking commences. The screen displays a mirrored image with fabric pieces overlaid on it. The clothing can be selected using an on-screen menu. To obtain a non-interpolated skeleton, the whole body must be within the tracking distance; if the user stands too close, non-visible skeletal points have to be interpolated. A simple wall or plane should be used as a backdrop behind the recorded subject for uninterrupted skeletal tracking.
IV. METHODOLOGY
The following steps show the overall flow of the system.
1. Start :
a) Plugin Kinect Sensor
b) Start Unity
2. Calibrate Sensor
a) For user detection
3. Find Joints
a) Kinect will find joints after detection of the user
4. Plot Bones
a) Bones will be plotted by connecting the joints retrieved from the Kinect
5. Real Time Human Body Detection
a) Realtime background
b) Upper body and Lower Body
c) Age Detection
d) Gender detection
6. Superimpose Clothes
a) Change category
b) Change clothes
c) Select Multiple clothes at a time
7. Scale Clothes
8. Move According to Human Body
9. Capture Image
10. Share Image
11. Stop
ALGORITHMS
The following algorithms are used to implement the whole system; an illustrative detection sketch for items d, e, and g is given after the list.
a. Pose Estimation
b. Mean Shift
c. Iterative Closest Point
d. OpenCV User Detection
e. Human Detection Using Haar Cascades
f. Body Scaling
g. Face Detection
h. Euclidean Distance
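As an illustration of OpenCV-based user, body, and face detection (items d, e, and g above), the sketch below uses the pre-trained Haar cascade files bundled with OpenCV on a single saved frame; the file name frame.png is a placeholder, and this is not the Kinect-based detection used in the system itself:

import cv2

# Load the pre-trained Haar cascades shipped with OpenCV
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
body_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_fullbody.xml')

frame = cv2.imread('frame.png')                  # one captured camera frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# detectMultiScale() returns one (x, y, w, h) bounding box per detection
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
bodies = body_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3)

# Draw the detections on the frame
for (x, y, w, h) in list(faces) + list(bodies):
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)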
POSE ESTIMATION ALGORITHM:
Step 1. Initialise the Kinect sensor and wait till the sensor reflects IR
Step 2. Read depth data and the inference map from the Kinect
Step 3. Get body joint points from the Kinect
Step 4. Map joint points and calculate the Euclidean distance between them
Step 5. Scale the size of the body according to distance
Step 6. Calculate centroids and the distance of each point using mean shift
Step 7. Plot joints in the frame
Step 8. If the body moves, adjust the joints according to the centroids and the distance between them
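Steps 4 and 5 (the Euclidean distance between joints and distance-based body scaling) can be illustrated with a small NumPy sketch; the joint coordinates and the reference torso length below are made-up example values, not measurements from the system:

import numpy as np

# Illustrative 3D joint positions from the depth sensor, in metres
joints = {
    'shoulder_center': np.array([0.00, 0.30, 2.12]),
    'hip_center':      np.array([0.01, -0.20, 2.15]),
}

def euclidean(a, b):
    # Euclidean distance between two joint positions
    return np.linalg.norm(a - b)

torso_length = euclidean(joints['shoulder_center'], joints['hip_center'])

# Scale the garment so that it matches the tracked torso length, relative to
# an assumed reference length the clothing mesh was modelled at
REFERENCE_TORSO_LENGTH = 0.55
scale_factor = torso_length / REFERENCE_TORSO_LENGTH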
Mean Shift algorithm:
Step 1 − First, start with the data points assigned to a cluster of their own.
Step 2 − Next, this algorithm will compute the centroids.
Step 3 − In this step, location of new centroids will be updated.
Step 4 − Now, the process will be iterated and moved to the higher density region.
Step 5 − At last, it will be stopped once the centroids reach a position from which they cannot move further.
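For reference, the same clustering behaviour can be reproduced with scikit-learn's MeanShift implementation; the sample points below are illustrative, not data produced by the system:

import numpy as np
from sklearn.cluster import MeanShift, estimate_bandwidth

# Illustrative 2D points (e.g. candidate joint positions in image coordinates)
X = np.array([[1.0, 1.0], [1.2, 0.9], [0.9, 1.1],
              [8.0, 8.0], [8.1, 7.9], [7.9, 8.2]])

bandwidth = estimate_bandwidth(X, quantile=0.5)   # kernel radius for the shift
ms = MeanShift(bandwidth=bandwidth)
ms.fit(X)

print(ms.cluster_centers_)   # final centroids after convergence (steps 2-5)
print(ms.labels_)            # cluster assignment of each point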
Iterative closest point (ICP)
Step 1: For each point in the source point cloud, find the nearest point in the reference (target) point cloud, using either the dense set of vertices or a selected subset of vertices from each model.
Step 2: Estimate the optimal rigid alignment of the source points to the matches found in the previous step, for example by minimising the root-mean-square of the point-to-point distances. A weighting component may be included at this stage as well.
Step 3: Weight the point pairs and eliminate outliers before aligning.
Step 4: Apply the estimated transformation to the source points.
Step 5: Iterate, if necessary (re-associate the points, and so on).
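A simplified NumPy/SciPy sketch of these ICP steps is given below; it assumes the source and target point clouds are N x 3 arrays and omits the weighting and outlier-rejection refinements mentioned in steps 2 and 3:

import numpy as np
from scipy.spatial import cKDTree

def best_fit_transform(src, dst):
    # Least-squares rigid transform (rotation R, translation t) mapping src onto dst
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:           # correct an improper (reflected) rotation
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

def icp(source, target, max_iterations=50, tolerance=1e-6):
    tree = cKDTree(target)
    src = source.copy()
    previous_error = np.inf
    for _ in range(max_iterations):
        distances, indices = tree.query(src)              # Step 1: nearest-neighbour matching
        R, t = best_fit_transform(src, target[indices])   # Step 2: best rigid alignment
        src = src @ R.T + t                               # Step 4: apply the transform
        error = distances.mean()
        if abs(previous_error - error) < tolerance:       # Step 5: iterate until convergence
            break
        previous_error = error
    return src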
Proposed Work
We utilised a large-screen TV, a Microsoft Kinect motion sensor, an HD camera, and a desktop PC to build our virtual try-on system. Figure No. 6 depicts the Interactive Mirror with the Kinect and the HD camera from the front. Microsoft developed the Kinect sensor for the Xbox 360 and for PC systems. It consists of a depth camera, an RGB camera, and microphone arrays. The horizontal and vertical fields of view of the depth and RGB cameras are 57.5 degrees and 43.5 degrees, respectively. The Kinect's motorised tilt lets the sensor point up or down within a range of -27 to +27 degrees. The depth camera has a range of [0.8, 4] m in normal mode and [0.4, 3] m in close mode. The HD camera captures photos at a maximum resolution of 2080 x 1552 pixels.
We calibrate the Kinect and HD cameras and construct 3D outfits and accessories during the offline preprocessing step. During the virtual try-on, we find the person in the target region who is the closest match. Section 4 describes how two publicly accessible Kinect SDKs use the motion-tracking component to track the subject. The interactive mirror lets the user interact with it by moving the right hand across the UI (User Interface).
Proposed System Features:
As stated at the outset, trying on clothes in stores can be a time-consuming process. The goal of this thesis is to create a virtual fitting room where users can quickly try on a variety of outfits. To give the garment a virtual look that is identical to the actual material, a realistic appearance of the modelled cloth is also necessary. A number of factors affect how a garment appears. First, creating a 3D model of the cloth takes some time, since it must be very realistic: a detail-rich mesh replicates the human body's proportions in all of their subtlety. Second, the texture's look must be true to reality. Third, physics simulations are included in the programme to produce a garment that behaves realistically; so, if the user puts on a skirt and moves in front of the Kinect, the material shows realistic and relevant qualities, such as fluttering. The Virtual Dressing Room's realism is further enhanced by the material's ability to adapt to different body types. Thanks to the measurements taken during the setup procedure, the cloth colliders, and consequently particular areas of the fabric, are customised to fit the user's body perfectly.
A. Class structure
As you can see in Figure No. 66, there are many classes in the Virtual Dressing Room. An overview of the class structure is
provided in this section, including the names of the classes, their functions, and their connections.
Figure No. 5: These black dots represent the various joint placements on the skeleton. The joints on the left side of the
skeleton may also be found on the right side.
In addition, the skeleton depicts the calibration stance. There is also an import feature for many 3D object storage formats (.obj, .FBX, etc.), and standard 3D application formats such as Cinema 4D's .c4d files can be read directly. The three-dimensional models that depict the pieces of fabric are imported using this object import method.
Mathematical Model
Let S be the Closed system defined as,
S = {Ip, Op, A, Ss, Su, Fi}
Where,
Ip=Set of Input,
Op=Set of Output,
Su= Success State,
Fi= Failure State,
A= Set of actions,
Ss= Set of user’s states.
Set of input = Ip = {user body, gestures}
Set of actions= A = {F1,F2,F3,F4,F5,F6}
Where,
F1 = Detection of user
F2 = Pose Estimation
F3 = User display with super imposed clothes
F4 = Slide to change using gestures
F5 = Show size, ratings and recommendations of clothes
F6 = Click image and share to social media
Set of output =
Op = {Superimposed clothes 3D image}
Su = Success state
= {Detection Success, Superimposing Success, Show info}
Fi = Failure State
= {Kinect failed, UserDetection failed}
Set of Exceptions = Ex
= {NullPointerException while in detection state, GestureNotFound (InvalidGesture) while in interaction state,
NullValues Exception while in showing state}
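The closed-system model above can be read as a pipeline that applies the actions F1 to F6 in order and terminates in one of the success or failure states. The sketch below is only an illustration of that reading: the action implementations are passed in as callables, and the dummy actions in the usage example are placeholders, not the system's real functions:

from enum import Enum

class State(Enum):
    SHOW_INFO = 'Show info'
    KINECT_FAILED = 'Kinect failed'
    USER_DETECTION_FAILED = 'UserDetection failed'

def run_pipeline(actions, user_body, gestures):
    # Apply F1..F6 in order; return a failure state as soon as a step fails
    if not actions['F1'](user_body):       # F1: detection of user
        return State.USER_DETECTION_FAILED
    pose = actions['F2'](user_body)        # F2: pose estimation
    actions['F3'](pose)                    # F3: display with superimposed clothes
    actions['F4'](gestures)                # F4: slide to change using gestures
    actions['F5'](pose)                    # F5: show size, ratings and recommendations
    actions['F6'](gestures)                # F6: capture image and share to social media
    return State.SHOW_INFO

# Usage with dummy placeholder actions, illustrating the success path
dummy_actions = {'F1': lambda body: True, 'F2': lambda body: 'pose',
                 'F3': print, 'F4': print, 'F5': print, 'F6': print}
print(run_pipeline(dummy_actions, user_body='user body', gestures=['swipe']))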
Turing Machine
M = {Q, P, Γ, B, δ, q0, F}
Where,
o Q = Set of all state.
o P = Set of all input alphabet.
o Γ = Set of all tape symbol.
o B = Blank symbol.
o δ : Q × Γ → Q × Γ × {L, R} is the transition function.
o q0 = Initial state.
o F = Set of all final states.
NP-Complete
The proposed system does not fall under NP-complete problems; it comes under class P, because:
It is a solvable problem.
It is deterministic.
If a graph of time against data size is drawn, it is linear: as the data increases, the time also increases.
The complexity of such problems is O(n), O(n²), or O(n³).
Suppose one gesture requires 1 ms to recognise; then n gestures require n ms to recognise. A graph of time against the number of gestures for this class P problem is therefore linear.
Figure No. 6 shows the user interface; the Unity software displays this interface when the Kinect sensor detects a user.
Calibration Pose Testing model
Figure No. 7 illustrates the working of the Calibration Pose Testing model. Once the user stands in the T-pose, the Kinect treats it as the calibration pose, because the T-pose is assigned as the calibration pose in the system.
Body Skeleton and Joints Detection Model
Fig. No. 8 illustrates the working of the Body Skeleton and Joints Detection Model. In this model, the user is detected by the Kinect sensor, and the body skeleton and joints are detected. The skeleton is highlighted with a virtual model and the joints with green points, as shown in the figure. This model was developed to detect the body together with its joints properly.
Fig. No. 10 illustrates the Multiple User Detection Testing Module. In this module, one user continues to use the system while multiple users appear in the background. The Kinect can detect multiple users, but the system works for the first user detected with the calibration pose.
Feedback
After testing the Virtual Dressing Room, the users were given a questionnaire to fill out in order to answer some questions about it. To begin, all participants evaluated the Virtual Dressing Room using a conventional setup; a big screen was given special consideration for testing reasons. Since the system should be self-explanatory, no advice or introduction was provided; even if it is installed in a fabric shop, an explanation cannot be given to each and every customer. After becoming acquainted with the interaction portion, the participants started to choose various garments. The participants then took a closer look at the Virtual Dressing Room's clothing and the different features (e.g. changing the background). Finally, a questionnaire was handed out. The answers were evaluated using a Likert scale, and the questionnaire included the following items.
V. CONCLUSION
Shoppers sometimes gripe about needing to spend hours physically putting on various ensembles as they look for items. If
you're working with a tight deadline and only have a limited amount of time, this might be taxing. This difficulty may be
solved by using a Virtual Mirror that serves as a virtual trial room. With a Kinect sensor, nodal points on the body are drawn,
and this data is used to produce an image of clothing over the user's body, eliminating the need to physically put on clothes
and therefore saving time for the wearer.
This was followed by an explanation of the underlying research, which involved developing an avatar, creating clothing,
implementing real-time tracking systems, and comparing similar virtual try-ons. After that, the technologies and frameworks
used to build the Virtual Dressing Room were thoroughly examined. The sections that followed gave more attention to various aspects of the design process, including making the garment models. After this comes the implementation, which includes
details on the fabric colliders and how the garment behaves, for instance.
We finished off with a look at how the Virtual Dressing Room functioned and appeared in regard to the tests. If you want a
quick, easy, and accurate way to try on clothes, the Virtual Dressing Room seems to be a great option. In order to have a
successful deployment, the Microsoft Kinect is a well-suited technology. Unlike technologies such as augmented reality markers or live motion capture systems, it requires little time and money to set up. It's a great addition to
a textile company because of this. Additionally, the system can be easily set up at home with only a computer, a screen, and a
Kinect. For example, additional functionality for an online shop may be the outcome of this. Customers will be able to visually
try on clothing before making an online purchase, giving them a clearer idea of what the ensemble would look like and even
showing them how real material behaves. This is a huge advantage when compared to the standard internet shopping
experience.
The user can only use gestures to choose and try on clothing.
The user can test both the top and lower body textile sets.
By tapping in the air, the user may capture a screenshot.
The user can share the image he or she has clicked on social media, such as Facebook.
By lifting hands and tapping in the air, the user may change the clothing category.
When utilising Virtual Dressing Room, the user may experience a real-time scenario.
The user can change the backdrop or modify it to his liking.
On the screen, the user will see cloth size recommendations in a number of formats, such as height and width and size labels (M, XXL, etc.).
On the screen, the user will see the number of views for the selected fabric.
On the screen, the user will see the brand of the cloth.
Even if several users are recognised by the Kinect Sensor, the user will feel comfortable using the system. This will have no
impact on the output of the primary user.
The user's age and gender will be displayed on the screen.
This project deals with a depth camera; therefore, this method uses the following types of technologies:
a. Markers
Markers are used to plot human body points in an image.
b. Optical tracking
Illusion and motion tracking of human bodies.
c. Depth cameras
3D representation of clothes to provide a real-time wearing experience for the user.
d. Gesture Controls
Gesture controls with gesture recognition are used to operate the system and toggle through clothes.
e. Use of OpenCV
The system uses the Kinect SDK as well as the OpenCV library to detect humans more precisely.
f. Calibration Pose
A calibration posture is not required to begin the interaction; the programmer can customise the settings to meet his needs.
g. Detection Upper body and Lower body together
The system will detect both the upper and lower bodies. As a result, the user will be able to sample both the upper and
lower body fabric sets.
h. Size Recommendation
On the screen, the user receives size recommendations in the form of height, width, and size labels (M, XXL, etc.) to assist in the selection of appropriate clothing.
i. Realtime Scenario
When utilising the Virtual Dressing Room System, the user experiences a real-time scenario. He can also change the background to
his liking.
Recommendations
1. The user should keep a distance equal to or less than 1.5 metres from the sensor to get proper results.
2. To get effective results, the user should use this system in good lighting.
3. The user should use the Windows platform only, because the Kinect is made by Microsoft and works only with Microsoft software.
2. Light Conditions:
Because cameras record reflections, light plays a significant role when working with them. In this instance, adequate lighting is required to identify people smoothly and capture all joints. We also use Unity with a fixed light direction, so the lighting conditions must match the specified directional light.
3. Platform:
Because the Kinect is a Microsoft product, it can only be used on Windows systems with the .NET Framework, making this solution currently non-cross-platform.
REFERENCES
[1] Statista, 2019a. Retail e-commerce sales worldwide from 2014 to 2021 (in billion U.S. dollars). [online] Available at:
https://www.statista.com/statistics/379046/worldwide-retail-e-commercesales/ [Accessed 15 August 2019].
[2] Statista, 2019b. Share of internet users who have purchased selected products online in the past 12 months as of 2018.
[online] Available at: https://www.statista.com/statistics/276846/reach-of-top-online-retail-categories-worldwide/
[Accessed 15 August 2019].
[3] Turban, E., Outland, J., King, D., Lee, J.K., Liang, T.P. and Turban, D.C., 2017. Electronic commerce 2018: a managerial
and social networks perspective. New York: Springer.
[4] Gemius, 2019. E-commerce w Polsce. Gemius dla e-Commerce Polska (E-commerce in Poland. Gemius for e-Commerce
Poland). Warsaw. [online] Available at: https://www.gemius.pl/ecommerce2019/cac74bec2fac091ac0370e918bdf5e3e
[Accessed 15 August 2019].
[5] Gao, Y., Brooks, E.P. and Brooks, A.L., 2014. The performance of self in the context of shopping in a virtual dressing room
system. In: International Conference on HCI in Business. Cham: Springer, pp.307-315.
[6] Patodiya, P.K. and Birla, P., 2017. Impact of Virtual–Try–On Online Apparel Shopping Decisions. Review Journal of
Philosophy & Social Science, Vol. 42, No.2, pp.197-206.
[7] Hester, Blake (January 14, 2020). "All the money in the world couldn't make Kinect happen"
[8] "E3 2010: Project Natal is "Kinect"". IGN. June 13, 2010. Retrieved March 18, 2011.
[9] Totilo, Stephen (June 5, 2009). "Microsoft: Project Natal Can Support Multiple Players, See Fingers". Kotaku. Gawker
Media. Retrieved June 6, 2009.
[10] Takahashi, Dean (June 2, 2009). "Microsoft games exec details how Project Natal was born". VentureBeat. Retrieved June
6, 2009. The companies are doing a lot of great work with the cameras. But the magic is in the software. It’s a combination
of partners and our own software.
[11] Dudley, Brier (June 3, 2009). "E3: New info on Microsoft's Natal -- how it works, multiplayer and PC versions". Brier
Dudley's Blog. The Seattle Times. Retrieved June 3, 2009. We actually built a software platform that was what we wanted
to have as content creators. And then [asked], 'OK, are there hardware solutions out there that plug in?' But the amount of
software and the quality of software are really the innovation in Natal.
[12] Gibson, Ellie (June 5, 2009). "E3: Post-Natal Discussion". Eurogamer. Eurogamer Network. pp. 1–2. Retrieved June
9, 2009. Essentially we do a 3D body scan of you. We graph 48 joints in your body and then those 48 joints are tracked in
real-time, at 30 frames per second. So several for your head, shoulders, elbows, hands, feet…
[13] Wilson, Mark; Buchanan, Matt (June 3, 2009). "Testing Project Natal: We Touched the Intangible". Gizmodo. Gawker
Media. Retrieved June 6, 2009.
[14] French, Michael (November 11, 2009). "Natal launch details leak from secret Microsoft tour". MCV. Intent Media.
Retrieved November 11, 2009. November 2010 release, 5m units global ship, 14 games, and super-low sub-£50 price [...]
Microsoft is planning to manufacture 5m units for day one release, with a mix of console and camera plus solus SKUs
expected. [...] The device should cost under £100,00 when sold solo. The somewhat confirmed price is stated to be at
150$ (USD)when sold alone this is 50$ higher than the original 99$ projected price. [...] Another even says the camera could
even retail for just £30.