
ISSN: 0974-5823   Vol. 7 (Special Issue, Jan.-Feb. 2022)
International Journal of Mechanical Engineering

Implementation of Virtual Dressing Room using Kinect along with OpenCV

Bhagyashree Tingare (Research Scholar), Dr. Prasadu Peddi (Guide), Dr. Prashant Kumbharkar (Co-Guide)
Computer Engineering Department, JJTU, Jhunjhunu, RJ

Abstract—Many shops and organisations have long waits for trial rooms while customers try products on. The proposed method processes garments in real time using real-time technologies. Using real-time simulation, the first step is to create a human clone, then use the subject's height and skin tone to give the skeleton a realistic look. GUI (Graphical User Interface) software is used in combination with hardware sensors for camera, light, and motion. By using the Unity SDK and the latest Kinect depth camera with an operating system that interfaces with user-friendly applications, the controllers can be handled effortlessly. Consumers and merchants alike may appreciate the user interface that results from this strategy, and this more user-friendly approach should lead to more marketing activity. As well as offering a proper dressing solution, the suggested system should solve challenges that both the shop and the end user face. Most people picture a traditional brick-and-mortar store when they think of shopping, and customers may be unable to try on items in real time if there aren't enough trial rooms available. Using a Virtual Mirror to try on garments and browse for new ones therefore has enormous potential: thanks to the "Virtual Reality" concept, customers may virtually try on a broad variety of items. The Virtual Trial Room concept has been investigated by several research groups, which have come up with different solutions. Customer fittings may be done on a humanoid mannequin using humanoid models that mirror the user's actions; a second option is to display a static picture of the apparel on screen and ask the user to adjust their position to match where the garment is. The "Virtual Mirror" instead detects the wearer and superimposes the apparel in a virtual trial room in real time. The system gathers data using the Kinect Sensor; Unity utilises the user's skeletal data, creates an outfit from it, and applies it to the wearer in real time, so that the outfit is overlaid on the video shown on the screen. This method has the advantage of requiring less time and effort to put on clothing. Customers may then buy the clothing by scanning the QR code that appears on the display screen. This project assists in market management by reducing the need for clients to try on each piece of clothing; merchants thus save time and space by not having to keep as much product on hand. Gesture recognition for garment selection might be used instead of a separate device to enhance this idea.
Keywords—Augmented Reality, 3D Depth Cameras, Kinect Sensor, OpenCV, Depth Image, Skeleton, Unity SDK

I. INTRODUCTION
E-commerce, defined as the sale of products and services through the Internet, generates a lot of data. The total value of online goods and services grew by USD 1,336 billion between 2014 and 2018[1]. Another indicator of e-commerce growth is the increasing share of online sales in total sales (online and offline): online sales accounted for 10.2% of total global sales in 2017, up from the previous year (Bakopoulos, 2019). Several factors have contributed to this, but one of the most significant is the propensity of Internet users to make clothing purchases. Statistics show that clothing is the most popular online category worldwide, and it brings in a lot of money[2]. Clothes are an excellent fit for online shopping because of its many advantages. To offer a wide range, a seller must be able to rapidly compare offers from multiple suppliers, swiftly adjust the offer in reaction to changing fashion trends (including discounts), and make purchasing as simple as possible. Sellers also face a unique set of challenges, since buyers want to match the product to their specific body type or skin tone. This industry has a higher return rate than others in e-commerce: returns may account for as much as 60% of overall sales, posing a significant problem for internet-based enterprises. One solution may be a virtual fitting room (hereinafter VFR). Using a VFR, anyone may virtually try on clothing before purchasing it, comparing size, fit, style, and colour; this gives the e-customer a sense of how an item will look before buying it. Customers benefit from a VFR since it acts as a "virtual mirror" that helps them make better purchasing decisions. Generation Y's inclination to use VFRs in buying choices is the subject of this study. VFR is having a hard time gaining traction since it is still a relatively new option for online retailers; many Internet users, even those from Generation Y who are adept at using information and communication technologies, have never heard of VFR before. Preliminary research is thus necessary to assess whether VFR may be employed. This study gives an in-depth investigation into attitudes about, and readiness to use, this kind of application. Two kinds of VFR were investigated: clothes imitation on a two-dimensional likeness/photograph of a client (the so-called 2D overlay) and clothes imitation on a three-dimensional form based on the customer's measurements (the so-called 3D mannequin).

The essence and considerations of online sales of clothing
 In today's economy, e-commerce is a given. Shoppers from across the world benefit from internet shopping every day because of the following reasons[3][4]:
 variety of products, with the ability to check product availability immediately
 ease of purchase, with purchases possible 24 hours a day
 self-configuration of products and reduced transaction costs
 the opportunity to find a deal
 the right to a refund without giving reasons (on the European market)
 various payment options
 easy product comparison.

 When it comes to internet buying, the benefits are balanced by drawbacks[10][12]: there is a lack of face-to-face contact with the salesperson, the goods cannot be seen and touched, and there are data security, payment security, and privacy concerns.
Online consumers are nevertheless overwhelmingly convinced that the advantages of online shopping outweigh the disadvantages, and as a result e-commerce revenue continues to rise year after year.
Current approach

 The current approach can be summarized as follows:

 A combination of depth data and user label data is used to find out who is in the video.
 The 3D fabric models are positioned with the help of the skeleton tracker.
 Body joints and the model are scaled based on Euclidean distances rather than on the user's proximity to the sensor.
 The model is superimposed on the user.
1. The user can select and try on clothes using gestures only.
2. The user can try a cloth set (upper body and lower body) together.
3. The user can take a screenshot by tapping in the air.
4. The user can change the clothing category by raising a hand and tapping in the air.
5. The user experiences a real-time scenario while using the Virtual Dressing Room.
6. The user can comfortably use the system even when multiple users are detected by the Kinect Sensor; this does not affect the output of the primary user.
KINECT SENSOR:
 The Kinect Sensor is made by Microsoft. It includes an RGB camera, a depth sensor, and a four-microphone array. The depth sensor incorporates an infrared (IR) projector and an IR camera, which also works in low light. Structured light is fundamental to its operation: the IR projector consists only of an infrared laser and a grating that broadcasts a known dot pattern. Because the IR projector geometry, the IR camera geometry, and the projected IR dot pattern are all well-documented facts, a 3D position can be recovered whenever a dot in the camera image is matched to a dot in the projector pattern.
 The Kinect sensor is used to build an IR depth map. Depth measurements are represented by grey values: the darker a pixel, the closer it is to the camera's focal point. Points that are too far from or too near to the sensor cannot be estimated; for these pixels no depth information is collected, and they are represented as black.
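The following is a minimal sketch of turning a depth frame into such a grey-value map. It assumes the depth frame has already been read into a NumPy array of millimetre values; the acquisition step, array name, and shape are illustrative assumptions, not part of this paper's pipeline:

import cv2
import numpy as np

def depth_to_gray(depth_mm):
    # Clip to the sensor's stated working range (500-4500 mm)
    d = np.clip(depth_mm.astype(np.float32), 500, 4500)
    # Nearer points map to darker pixels, as described above
    gray = ((d - 500) / (4500 - 500) * 255).astype(np.uint8)
    # Pixels with no depth reading stay black
    gray[depth_mm == 0] = 0
    return gray

# Example with a synthetic 512 x 424 frame:
frame = np.random.randint(0, 4500, (424, 512), dtype=np.uint16)
cv2.imwrite('depth_gray.png', depth_to_gray(frame))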

Figure No. 1: Kinect Sensor.


Copyrights @Kalahari Journals Vol. 7 (Special Issue, Jan.-Feb. 2022)
International Journal of Mechanical Engineering
95
 Specifications of the Kinect cameras are as follows:

Table No. 1: Kinect Sensor Camera Specifications

Camera         Property              Value
RGB camera     Resolution            1920 x 1080
               Frame rate            30 fps
Depth sensor   Resolution            512 x 424
               Frame rate            30 fps
               Measurement range     500-4500 mm
View angle     RGB (horizontal)      84.1 deg
               RGB (vertical)        53.8 deg
               Depth (horizontal)    70 deg
               Depth (vertical)      60 deg
OpenCV:

OpenCV is a well-known computer vision library. Anyone interested in computer vision needs to understand its foundations; the basic operations used in this system are listed below.

1. Reading an image
   # Import the OpenCV library
   import cv2
   # Read the image using the imread() function
   image = cv2.imread('image.png')
   # Extract the height and width of the image
   h, w = image.shape[:2]
   # Display the height and width
   print("Height = {}, Width = {}".format(h, w))

2. Extracting the RGB values of a pixel
   # Extract the RGB values; here a pixel is chosen by passing
   # row 100, column 100 (OpenCV stores channels in B, G, R order)
   (B, G, R) = image[100, 100]
   # Display the pixel values
   print("R = {}, G = {}, B = {}".format(R, G, B))
   # A channel index can also be passed to extract a single channel
   B = image[100, 100, 0]
   print("B = {}".format(B))
3. Extracting the Region of Interest (ROI)
   # The region of interest is obtained by slicing the image's pixels
   roi = image[100 : 500, 200 : 700]
4. Resizing the Image
   # resize() takes two parameters: the image and the target dimensions
   resize = cv2.resize(image, (800, 800))

The problem with this approach is that the aspect ratio of the image is not maintained, so some extra work is needed in order to maintain a proper aspect ratio.
   # Calculate the scaling ratio from the original width
   ratio = 800 / w
   # Create a tuple containing the new width and height
   dim = (800, int(h * ratio))
   # Resize the image while preserving the aspect ratio
   resize_aspect = cv2.resize(image, dim)

Copyrights @Kalahari Journals Vol. 7 (Special Issue, Jan.-Feb. 2022)


International Journal of Mechanical Engineering
96
5. Rotating the Image
   # Calculate the center of the image
   center = (w // 2, h // 2)
   # Generate a rotation matrix (45 degrees clockwise, no scaling)
   matrix = cv2.getRotationMatrix2D(center, -45, 1.0)
   # Perform the affine transformation
   rotated = cv2.warpAffine(image, matrix, (w, h))

There are a few steps involved in rotating an image. The two main functions used here are:
 getRotationMatrix2D()
 warpAffine()

getRotationMatrix2D() takes 3 arguments:
 center – the center coordinates of the image
 angle – the angle (in degrees) by which the image should be rotated
 scale – the scaling factor

It returns a 2*3 matrix whose entries are derived from alpha and beta:
alpha = scale * cos(angle)
beta = scale * sin(angle)

warpAffine() transforms the source image using the rotation matrix:
dst(x, y) = src(M11*x + M12*y + M13, M21*x + M22*y + M23)

Here M is the rotation matrix described above. It calculates the new x, y coordinates of the image and transforms it.
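To make the alpha/beta relationship concrete, the following sketch rebuilds the 2*3 matrix by hand and compares it with getRotationMatrix2D(). The translation terms that keep the rotation centred follow the standard OpenCV definition of this matrix; the centre value is an illustrative assumption:

import numpy as np
import cv2

center, angle, scale = (400.0, 300.0), -45.0, 1.0
alpha = scale * np.cos(np.radians(angle))
beta = scale * np.sin(np.radians(angle))
cx, cy = center
# Full 2*3 matrix: rotation part plus the centering translation
manual = np.array([[alpha, beta, (1 - alpha) * cx - beta * cy],
                   [-beta, alpha, beta * cx + (1 - alpha) * cy]])
print(np.allclose(manual, cv2.getRotationMatrix2D(center, angle, scale)))  # True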

6. Drawing a Rectangle
   # Copy the original image, since rectangle() is an in-place operation
   output = image.copy()
   # Draw a rectangle using the rectangle() function
   # (top-left corner first, then bottom-right corner)
   rectangle = cv2.rectangle(output, (600, 400), (1500, 900), (255, 0, 0), 2)

rectangle() takes in 5 arguments:
 image
 top-left corner co-ordinates
 bottom-right corner co-ordinates
 color (in BGR format)
 line width

7. Displaying text
   # Copy the original image, since putText() is an in-place operation
   output = image.copy()

   # Add the text using the putText() function
   text = cv2.putText(output, 'OpenCV Demo', (500, 550), cv2.FONT_HERSHEY_SIMPLEX, 4, (255, 0, 0), 2)

putText() takes in 7 arguments:
 image
 text to be displayed
 bottom-left corner co-ordinates, from where the text should start
 font
 font size
 color (BGR format)
 line width

II. REVIEW OF LITERATURE


 The roots of the Kinect go back to around 2005, with the advent of depth-sensing cameras. The concept sat in Microsoft's "Boneyard," a collection of potential technologies on which the company had given up working, and there were rumours that Microsoft was working on an Xbox-specific 3D camera. [5]
1. According to research by Srinivasan K. and Vivek S. (2017)[23], people's desire to get the most out of online shopping by purchasing gowns with the complete joy of personal realisation supports the need for an algorithm that digitally dresses them in their chosen gown. Extracting the silhouette of a person in a noisy environment against a changing backdrop is the most difficult image-processing step when working with a static picture.
2. Ari Kusumaningsih and Eko Mulyanto Yuniarno (2017)[24] created virtual changing rooms for Madura batik garments. The proposed approach calls for building a changing room specialised to Madura batik clothing in order to attract consumers, increase sales, and promote Madura's culture. Quick and effective calculation techniques are needed to cope with a large number of 3D models, so that the virtual dressing room can be created without the need for a powerful computer.
3. Ting Liu and LingZhi Li (2017)[25] suggested an avatar system that uses Kinect video feeds to track skeletons and extract persons. This technology could be used to match apparel models with customers. It was also possible to create virtual dressing software that showed 3D garment representations from all angles while dressing. Improved garment modelling techniques and faster reconstruction based on actual apparel are both advantages.
4. According to the research in [26], Kinect can quickly measure the length and circumference of a person's body. The findings show that the Kinect sensor is able to collect human body data; everything was reviewed and re-checked in case of data errors. Data collection accuracy and the CG are two challenges that must be addressed in the future.
5. According to Dr. Anthony L. Brooks and Dr. Eva Peterson Brooks (2014)[27], the general public's responses to live demonstrations at malls and Messe events were diverse. Thirteen wheelchair users, and others who believed that a specific enhancement to the product would benefit a wheelchair user, provided direct feedback. The interface has to be near enough for the user to see the control details in order to use it.
6. The study by Reizo Nakamura and Masaki Izutsu (2013)[28] presents methods for determining the size of body suits. First, Kinect is capable of recognising individuals, and the person-recognition data may be used to derive the person's area in the image. Contour tracing is used to obtain the user's mark points, which served as a rough estimate of the size of the body suits before any were built.
7. Poonpong Boonbrahm and Charlee Kaewrat (2015)[29] showed that a physical parameter from their experiment may be used to forecast how textiles will appear in simulation. When creating a virtual fitting room in a virtual environment, the simulation results can tell whether a customer is wearing denim, satin, silk, or cotton.
8. Umut Gültepe and Uğur Güdükbay (2014)[30] offer a novel virtual dressing room that makes use of depth sensor data. The framework delivers a realistic fitting experience for common body types using specialised motion filters, body measurement, and physical simulation. The avatar's body and clothes may be scaled up or down by adjusting the scaling settings; collision meshes can also be generated, and the scaling technique used to execute physics simulations. Adding data from an RGB sensor will improve measurement accuracy and allow for more visual scaling in the future, and increasing the number of collision spheres would enhance collision detection.
9. Ayushi Gahlot and Purvi Agarwal (2016)[31] use Kinect technology and human skeletal tracking to identify activity. Computer vision-based human-computer interaction (HCI) technology such as Microsoft Kinect is at the bleeding edge of innovation. In this work, the Kinect sensor uses real-time bone tracking and depth imaging information to identify human body movements in 3D space. The way humans interact with technology has been fundamentally transformed by the widespread adoption of Kinect, and it may be put to good use in a number of different contexts. The study shows that Kinect might be used to recognise skeletal-based activities.

 Some of Israel's most innovative mathematicians and engineers had spent several years working on video games, trying to create the "next big thing." After viewing PrimeSense's technology shown at the 2006 Game Developers Conference, Alex Kipman, Microsoft's general manager of hardware incubation, recognised its promise for the Xbox system. With PrimeSense, Microsoft started to examine ways to make the gadget more user-friendly, including improvements in depth-detecting camera capabilities and a decrease in size and cost, and the two companies discussed how to mass-produce the devices. PrimeSense continued to work on these advancements over the next several years. [5]
 Nintendo finally debuted the Wii in November 2006. For motion-controlled video games, Nintendo designed a special handheld controller called the Wii Remote, which was recognised by the Wii through a motion sensor bar installed near the television screen. The success of the Wii prompted Microsoft to experiment with PrimeSense-based depth sensing, but they failed to achieve the degree of motion tracking they hoped for: only hand motions and the overall shape of a person were visible, so skeletal movements could not be detected. The idea of making something like a Wii Remote, in the hope it would become as mainstream as two-thumbstick controllers, came from within Microsoft; however, Microsoft's ultimate objective was to erase all barriers between the user and the Xbox. [5]
 After Microsoft hired Kudo Tsunoda and Andrew Bennett, Kipman developed a novel strategy in 2008 that merged depth sensing with machine learning to enhance bone tracking. Internally, Microsoft officials intended to move away from a Wii-like motion tracking method in favour of a depth sensing solution that went beyond the capabilities of the Wii, which led to a strong demand for funding further development of the technology. The project was approved in its final form late in 2008, and work began in earnest in early 2009. [5]
 The project was named "Natal" in honour of Kipman's birthplace, the Brazilian city of Natal; the name derives from the Latin root natus, alluding to the new generation of viewers they intended to reach with the new technology.[5] Early work was heavily focused on ethnographic studies to better understand how gamers' homes were decorated and lit, and what they did with the Wii, in order to better design how Kinect devices would be used. This study led Microsoft to believe that either the depth-detecting camera's up-and-down angle had to be adjusted manually or an expensive motor was required to move it automatically. To avoid interfering with gamers' immersion in games, Microsoft's senior executives decided to install the motor despite the added expense. The Kinect project also included packaging and performance improvements for mass manufacture of the device. The hardware was developed over the course of around 22 months. [5]
 Microsoft worked with software developers to integrate Kinect into the hardware development process. Kinect's capacity to recognise several individuals at once made Microsoft imagine family-friendly games. Kinect Adventures, by Microsoft Studios' Good Science Studio, was one of the first internal games for the device. The Kinect Adventures level "Reflex Ridge" is based on the Japanese game "Brain Wall," in which players have a limited amount of time to bend their bodies to match cutouts in a moving wall. This kind of game helped advance Kinect technology by demonstrating the level of engagement they hoped to achieve with its design. [5]
 As the launch date approached, extensive Kinect testing was needed in a range of settings and with a variety of individuals, while protecting the privacy of the device's data. Microsoft gave its workers the chance to try out Kinect devices at home through a company-wide campaign. Non-gaming Microsoft divisions such as Microsoft Research, Windows, and Bing were also invited to assist with the finalisation of the system. Microsoft built a large-scale manufacturing facility to mass-produce and test Kinect devices, making the device ready for sale. [5]
 On June 1, 2009, Microsoft's E3 press conference introduced Kinect as "Project Natal," with director Steven Spielberg and Microsoft's Don Mattrick on hand to showcase the technology and its possibilities[5][6].
 Ricochet and Paint Party from Microsoft, Peter Molyneux's Milo & Kate from Lionhead Studios, and a Kinect-enabled Burnout Paradise from Criterion Games were also showcased during the briefing. By E3 2009, the bone-mapping technology could track four people at once[7][8][9][10], using a 30 Hz feature extraction method on 48 human skeleton locations[11][12]. At E3, Microsoft did not commit to a precise release date for Kinect to compete with the Wii and Sony Interactive Entertainment's PlayStation Move motion controller; a 2010 launch was suggested but not guaranteed.
 In the months after E3, rumours circulated that a new Xbox 360 system would be unveiled with Kinect, either at retail or as part of Microsoft's Project Natal, with support arriving as a software upgrade or as a hardware change.[13][14][15][16] According to Microsoft, Kinect would be compatible with all Xbox 360 systems, despite the rumours to the contrary. Microsoft also stated that the rollout of Kinect coincided with the launch of a new Xbox system platform, making it an important project for the corporation overall. According to Microsoft Vice President Shane Kim, Kinect was unlikely to extend the Xbox 360's expected lifespan beyond 2015, and he said the next Xbox would not be delayed because of Kinect's release. [8] [17] Following the E3 2009 showcase, a handful of games were found to work well with Kinect-based control schemes; Beautiful Katamari and Space Invaders Extreme were two games on exhibit at the 2009 Tokyo Game Show. [18] According to Tsunoda, older games would not be patched to work with the device, since incorporating Project Natal-based control into existing games required considerable code adjustments. [19] Microsoft increased the incentives for third-party developers to create Kinect-based games, and Harmonix and Double Fine saw the Kinect's potential quickly and started making games for it [5].
 Microsoft said in January 2010 that it would remove a CPU from the Kinect sensor unit that was intended to perform bone mapping operations; instead, the processing would be handled by one of the Xenon CPU cores of the Xbox 360. [20] According to Kipman's initial estimates, Kinect would use just 10 to 15 percent of the Xbox 360's processing capacity. [21] Even though motion tracking would use only a minor share of the Xbox 360's capabilities, industry observers predicted that adding it to current games would increase the computing burden and push the Xbox 360's limits, and that developers would create games with the Kinect sensor in mind[22].

III. DESIGN
As with in-store changing rooms, an online dressing room (also known as a virtual fitting room or virtual changing room) serves a similar purpose: customers may check one or more of size, fit, and style online rather than physically. There have been several reports on fit technologies since 2010[32], and they are now accessible from a wider range of providers[33]. A rising number of well-known merchants have also begun to employ these technologies in their webstores. Fit technology can be classified by the issue it addresses (such as size, fit, or style) or by the technical approach, and there are several technology approaches to choose from.

 Size recommendation services


The proposed size is determined by customer-recommendation systems. Because size and fit are two different concepts, these approaches can only provide basic information on fit. There are a variety of size recommendation systems on the market. Recommendations may come from algorithms created outside of clothing, such as ring sizers; Find My Ring Size is one such method. [34] A number of approaches combine measurements with existing items (known as biometric sizing), while others include questions about personal style preferences as part of the recommendation process. Dimensions may be generated from a company's own-brand items, or drawn from suppliers' databases of garment design measurements.

 Body scanners
One form of body scanner uses webcams, phone cameras, or Microsoft's Kinect device, while another uses more modern technology and requires the customer to travel to the scanner. With web or phone camera technology, the customer must keep a certain distance from the camera and hold something that can serve as a size reference (like a CD). Due to their complexity and expense, the most modern scanners, such as those using laser or millimetre-wave detector technology or large arrays of Kinect sensors, aren't found in most businesses; instead, they're located in shopping malls and big department stores. A customer must visit such a location to be scanned, and the resulting data can then be used on websites. Body scanner fit technology has been available since the advent of Levi's Intellifit system in 2005[35]. Unique Solutions acquired Intellifit in 2009[36] and later renamed the company Me-ality. [37] Weekday's body scan jeans, for example, employ scanners to create customised apparel. [38]

 3D solutions
A virtual-world-like experience is created using computer-generated 3D visuals in 3D fitting rooms. Based on consumer body measurements and shape data, these technologies create a virtual mannequin (avatar). To build the avatar, customers must input measurements of themselves. The avatar can be customised in terms of race, skin tone, hairstyle, and even a picture of the customer's own face. Customers may then see how they'll appear wearing the apparel, accessories, and other products on sale. Some of the more advanced versions let customers compare a variety of clothing styles side by side and try on several items at the same time.

 3D customer’s model
With these technologies, customers may create a 3D model of themselves using data gleaned from scans or other sources, such as
photos or videos.

 Fitting room with real 3D simulation


This fitting room combines the advantages of 3D solutions with those of photo-accurate fitting rooms. Through the use of pictures
and basic body measurements, the approach builds a 3D mannequin that accurately depicts the customer dressed in various clothing
items. A size is often recommended for the consumer, but they may also choose other sizes to have a better understanding of how
well they'll fit.

 Dress-up mannequins/mix-and-match
This variant uses actual mannequins to photograph clothing and accessories, then digitally removes the mannequins to leave a virtual representation of the brand in their stead. The customer may then drag and drop (as well as mix and match) items onto the virtual mannequin. Some of these technologies may also be used to reduce the expense of human models in garment photography by standardising photographic methods.

 Photo-accurate virtual fitting room


Dress-up mannequins are combined with genuine garment photography to create this approach. Rather than photographing products on real people, photographs are made using robotic mannequins that can change shape and size to match buyers' bodies. The computer-controlled mannequins quickly shift from one body form and size to another while outfits – in each size – are photographed and stored in a database, so the whole process goes quickly.
It is customary to remove the mannequin from the final version of the images and replace it with an avatar that reflects the company's branding.

When a customer inputs their measurements into the system, the database retrieves and displays the right group of images — those in which the mannequin has the same proportions as the shopper.

 Augmented reality
In many augmented reality systems, a photograph of a garment or accessory is overlaid on a live video feed from the customer's camera. The virtual garment or accessory appears to be worn by the consumer in the video thanks to an overlaid 3D model or image that moves with the customer. A webcam, smartphone camera, or 3D camera like the Kinect is usually required for AR virtual dressing rooms to work; Zugara's Webcam Social Shopper shows how this works. The use of a 3D camera to alter portions of a garment or accessory inside a display is another example of augmented reality in virtual dressing rooms. [284]

 Real models
The first version is currently readily accessible through a variety of online stores: the description of an item includes its dimensions and the characteristics of the person wearing it. Some firms even go so far as to provide images of apparel on a variety of models, each wearing a different size. A video of each model may be shown to customers, who can make the model walk or move around on the screen to get a better sense of how the outfit would look on them.

3.2 System description


The basic setup of the Virtual Dressing Room consists of the following parts:
 Microsoft Kinect
 recording the depth data
 capturing the RGB video stream

 Display / screen (large)
 outputting the recorded video stream (mirrored!)
 superimposing the selected garment on the output
 displaying the user interface for cloth selection

 Computer
 executing the algorithms for skeleton tracking
 controlling the movement of the cloth colliders
 combining the video stream and skeleton data (same viewpoint)
 computing the cloth physics simulations
 etc.

Figure No. 2: Basic setup of the Virtual Dressing Room

Figure No. 3: Human Detection with Virtual Dressing Room

Figures No. 2 and 3 depict the setup in detail. The Kinect camera captures the scene, as well as the user moving in front of it. After a few seconds in the calibration posture, tracking commences. The screen displays a mirrored image with the fabric pieces superimposed on it. The clothing may be selected using an on-screen menu. To get a non-interpolated skeleton, the user must stand at the full-body tracking distance; if the user is closer, the non-visible skeletal points must be interpolated. A simple wall or plane as a backdrop behind the recorded subject helps ensure uninterrupted skeletal tracking.

IV. METHODOLOGY
The following steps show the overall flow of the system.
1. Start:
   a) Plug in the Kinect Sensor
   b) Start Unity
2. Calibrate Sensor
   a) For user detection
3. Find Joints
   a) The Kinect finds the joints after detecting the user
4. Plot Bones
   a) Bones are plotted by connecting the joints retrieved from the Kinect
5. Real-Time Human Body Detection
   a) Real-time background
   b) Upper body and lower body
   c) Age detection
   d) Gender detection
6. Superimpose Clothes
   a) Change category
   b) Change clothes
   c) Select multiple clothes at a time
7. Scale Clothes
8. Move According to Human Body
9. Capture Image
10. Share Image
11. Stop
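The loop below is a hypothetical Python sketch of this flow. All helper names (sensor.get_joints(), overlay_cloth(), and so on) stand in for the Kinect SDK and Unity calls and are assumptions for illustration, not the actual API used in the implementation:

def run_virtual_dressing_room(sensor, ui):
    sensor.calibrate()                         # Step 2: wait for the calibration pose
    while ui.running:
        joints = sensor.get_joints()           # Step 3: joint positions from the Kinect
        if joints is None:
            continue                           # no user detected yet
        skeleton = connect_bones(joints)       # Step 4: plot bones between joints
        cloth = ui.selected_cloth()            # Step 6: category / cloth selection
        scaled = scale_cloth(cloth, skeleton)  # Step 7: scale to the user's body
        frame = overlay_cloth(sensor.get_frame(), scaled, skeleton)  # Steps 6 and 8
        ui.show(frame)                         # mirrored output on the screen
        if ui.capture_requested():             # Steps 9-10: capture and share
            save_and_share(frame)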

ALGORITHMS
The following algorithms are used to implement the whole system.
a. Pose Estimation
b. Mean Shift
c. Iterative Closest Point (ICP)
d. OpenCV User Detection
e. Human Detection Using Haar Cascades
f. Body Scaling
g. Face Detection
h. Euclidean Distance
POSE ESTIMATION ALGORITHM:
 Step 1. Initialize the Kinect sensor and wait until the sensor reflects IR.
 Step 2. Read the depth data and inference map from the Kinect.
 Step 3. Get the body joint points from the Kinect.
 Step 4. Map the joint points and calculate the Euclidean distances.
 Step 5. Scale the size of the body according to the distances.
 Step 6. Calculate the centroids and the distance of each point using mean shift.
 Step 7. Plot the joints in the frame.
 Step 8. If the body moves, adjust the joints according to the centroids and the distances between them.
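A small sketch of Steps 4-5: computing the Euclidean distance between two tracked joints and deriving a scale factor for the garment. The joint names, coordinates, and the reference shoulder width are illustrative assumptions:

import numpy as np

def euclidean(p, q):
    return float(np.linalg.norm(np.asarray(p) - np.asarray(q)))

def body_scale(joints, reference_shoulder_width=0.35):
    # joints: dict mapping joint name -> (x, y, z) position in metres
    width = euclidean(joints['shoulder_left'], joints['shoulder_right'])
    # Scale the cloth model so its shoulders match the tracked user
    return width / reference_shoulder_width

joints = {'shoulder_left': (-0.2, 1.4, 2.0), 'shoulder_right': (0.2, 1.4, 2.0)}
print(body_scale(joints))  # ~1.14 for a 0.40 m shoulder span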
Mean Shift algorithm:
 Step 1 − First, start with the data points each assigned to a cluster of their own.
 Step 2 − Next, the algorithm computes the centroids.
 Step 3 − In this step, the locations of the new centroids are updated.
 Step 4 − Now the process is iterated, and the centroids move toward the higher-density region.
 Step 5 − Finally, the algorithm stops once the centroids reach a position from which they cannot move further.
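The steps above can be condensed into a short NumPy sketch. It uses a flat window of fixed radius as the kernel; the sample points, bandwidth, and tolerance are illustrative assumptions:

import numpy as np

def mean_shift(points, bandwidth=1.0, iters=50, tol=1e-4):
    centroids = points.copy()                # Step 1: every point starts as a centroid
    for _ in range(iters):
        moved = np.empty_like(centroids)
        for i, c in enumerate(centroids):
            # Step 2: gather the points inside the window around this centroid
            window = points[np.linalg.norm(points - c, axis=1) < bandwidth]
            moved[i] = window.mean(axis=0)   # Step 3: shift to the window mean
        if np.linalg.norm(moved - centroids) < tol:
            break                            # Step 5: centroids no longer move
        centroids = moved                    # Step 4: move toward higher density
    return centroids

pts = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.1], [5.2, 4.9]])
print(np.round(mean_shift(pts), 2))          # two dense regions emerge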
Iterative closest point (ICP)
 Step 1: Match each source point-cloud point (from the dense set of vertices, or a selected set of pairs of vertices from each model) to the nearest point in the reference point cloud.
 Step 2: Estimate the optimum alignment of each source point to the match found in the previous step, e.g. by minimising the root-mean-square of the metric distances between point pairs. A weighting component may also be included at this stage.
 Step 3: Weight and eliminate outlier pairs before aligning.
 Step 4: Apply the estimated transformation to the source points.
 Step 5: Iterate if necessary (re-associate the points, and so on).
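A compact sketch of one ICP pass following these steps: nearest-neighbour matching with a KD-tree, a least-squares (RMS) rigid transform via SVD, and application of that transform. The point sets are illustrative assumptions, and the outlier rejection of Step 3 is omitted for brevity:

import numpy as np
from scipy.spatial import cKDTree

def icp_step(source, target):
    # Step 1: match each source point to its nearest reference point
    _, idx = cKDTree(target).query(source)
    matched = target[idx]
    # Step 2: best rigid transform in the least-squares sense via SVD
    src_c, tgt_c = source.mean(axis=0), matched.mean(axis=0)
    H = (source - src_c).T @ (matched - tgt_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = tgt_c - R @ src_c
    # Step 4: apply the estimated transform to the source points
    return source @ R.T + t

# Step 5: iterate until the alignment stops improving
src = np.random.rand(100, 3)
tgt = src + np.array([0.5, 0.0, 0.0])  # a shifted copy of the same cloud
for _ in range(10):
    src = icp_step(src, tgt)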

1. Steps To Carry Out Project Work

1. The camera takes a photo of the user and transmits it to the software module for processing.
2. Face detection and body detection algorithms are used to look for human faces in the footage. After LK tracking, the images are masked.
3. Unwanted components of the video stream are removed. Segmentation and fitting are accomplished with the help of the Viola-Jones and Graph Cut algorithms.
4. The shirt image is then shown on top of the masked image.
5. The customer may decide whether or not to buy the item after trying it on.
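As a hedged sketch of the first steps, the snippet below runs Viola-Jones (Haar cascade) face detection with OpenCV and naively pastes a shirt image below the detected face. The file names and the torso-region heuristic are illustrative assumptions; the actual pipeline additionally uses LK tracking and Graph Cut segmentation, which are not shown here:

import cv2

frame = cv2.imread('frame.png')   # one frame from the camera
shirt = cv2.imread('shirt.png')   # garment image to superimpose
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
    # Rough torso estimate: below the face, about three face-widths wide
    tx, ty = max(x + w // 2 - (3 * w) // 2, 0), y + h
    region = frame[ty:ty + 4 * h, tx:tx + 3 * w]
    if region.size == 0:
        continue  # face too close to the frame border
    # Resize the shirt to the (possibly clipped) torso region and paste it
    frame[ty:ty + region.shape[0], tx:tx + region.shape[1]] = \
        cv2.resize(shirt, (region.shape[1], region.shape[0]))
cv2.imwrite('tryon.png', frame)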

Proposed Work
 We used a large-screen TV, a Microsoft Kinect motion sensor, an HD camera, and a desktop PC to build our virtual try-on system. Figure No. 6 depicts an Interactive Mirror with a Kinect and an HD camera from the front. Microsoft developed the Kinect sensor to connect to Xbox 360 and PC game systems. A depth camera, an RGB camera, and microphone arrays make up the device. The horizontal and vertical fields of view of the depth and RGB cameras are 57.5 degrees and 43.5 degrees, respectively. Kinect features a -27 to +27 degree tilt range, upward and downward. The depth camera has a range of 0.8-4 m in normal mode and 0.4-3 m in close mode. The HD camera can capture photos at a maximum resolution of 2080 x 1552 pixels.
 We calibrate the Kinect and HD cameras and construct the 3D outfits and accessories during the offline preprocessing step. During the virtual try-on, we find the person in the target region who is the closest match. Section 4 describes how two publicly accessible Kinect SDKs use the motion tracking component to track the subject. The interactive mirror is operated by moving the right hand across the UI (User Interface).
Proposed System Features:
 As noted at the outset, trying on clothes in stores can be a time-consuming process. The goal of this thesis is to create a virtual fitting room where users may quickly try on a variety of outfits. To create a virtual garment that is identical to the actual material, a realistic appearance of the modelled cloth is also necessary. A number of factors affect how a garment appears. First, creating a 3D model of the cloth takes time, since the detail-rich mesh must replicate the human body's proportions in all their subtlety. Second, the texture's look must be true to reality. Third, physics simulation is included in the programme so that the garment behaves realistically: if the user is wearing a skirt and moves in front of the Kinect, the material shows realistic and relevant qualities, such as fluttering. The Virtual Dressing Room's realism is further enhanced by the material's ability to adapt to different body types. Using the measurements taken during the setup procedure, the cloth colliders, and consequently particular areas of the fabric, are customised to fit the user's body.

 The proposed system provides the following facilities (a gesture-detection sketch follows the list):

 The user can select and try on clothes using gestures only.
 The user can try a cloth set (upper body and lower body) together.
 The user can take a screenshot by tapping in the air.
 The user can share the captured image on social media, e.g. Facebook.
 The user can change the clothing category by raising a hand and tapping in the air.
 The user experiences a real-time scenario while using the Virtual Dressing Room.
 The user can change the background or customize it as desired.
 The user gets cloth size recommendations on the screen in a variety of forms (height, width, and X, M, XXL, etc.).
 The user gets the number of views for the selected cloth on the screen.
 The user sees the brand of the cloth on the screen.
 The user can comfortably use the system even when multiple users are detected by the Kinect Sensor; this does not affect the output of the primary user.
 The user can see age and gender on the screen.
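As a minimal sketch of the two gestures this list relies on, the functions below test for a raised hand (the hand joint above the head joint) and an air tap (the hand moving sharply toward the sensor between frames). The joint names, coordinate conventions, and thresholds are illustrative assumptions:

def hand_raised(joints):
    # y grows upward in this sketch's convention
    return joints['hand_right'][1] > joints['head'][1]

def air_tap(prev_joints, joints, push=0.15):
    # a tap is a quick decrease of the hand's distance (z) to the sensor
    return (prev_joints['hand_right'][2] - joints['hand_right'][2]) > push

prev = {'hand_right': (0.3, 1.2, 2.0), 'head': (0.0, 1.6, 2.1)}
cur = {'hand_right': (0.3, 1.7, 1.8), 'head': (0.0, 1.6, 2.1)}
print(hand_raised(cur), air_tap(prev, cur))  # True True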

A. Class structure
 As shown in Figure No. 4, there are many classes in the Virtual Dressing Room. This section provides an overview of the class structure, including the names of the classes, their functions, and their connections.

Figure No. 4: The Class Hierarchy of the application.

 The Interactive Engine of Unity


The second most important component is the cloth. Many fabric behaviour characteristics may be specified to tailor the physical
behaviour to the material. Colliders connect the garment to other objects to produce a lifelike duplicate of the cloth.


Figure No. 5: These black dots represent the various joint placements on the skeleton. The joints on the left side of the
skeleton may also be found on the right side.

In addition, the skeleton depicts the calibration stance. There is also an import feature for many 3D object storage formats (.obj, .FBX, etc.), and standard 3D programme formats such as Cinema 4D's .c4d files can be read directly. The three-dimensional models that depict the pieces of fabric are imported using this object import method.

Mathematical Model
Let S be the closed system defined as
S = {Ip, Op, A, Ss, Su, Fi}
where
Ip = set of inputs,
Op = set of outputs,
A = set of actions,
Ss = set of user's states,
Su = success state,
Fi = failure state.

Set of inputs: Ip = {user body, gestures}
Set of actions: A = {F1, F2, F3, F4, F5, F6}
where
F1 = detection of user
F2 = pose estimation
F3 = user display with superimposed clothes
F4 = slide to change using gestures
F5 = show size, ratings, and recommendations of clothes
F6 = click image and share to social media

Set of user's states:
Ss = {calibration state, detection state, selection of products, check size, show ratings, show recommendations}

Set of outputs:
Op = {superimposed clothes 3D image}

Success state:
Su = {detection success, superimposing success, show info}

Failure state:
Fi = {Kinect failed, user detection failed}

Set of exceptions:
Ex = {NullPointerException during the detection state, GestureNotFound (InvalidGesture) during the interaction state, NullValues exception during the showing state}

Turing Machine
M = {Q, P, Γ, B, δ, q0, F}
where
o Q = set of all states
o P = set of all input alphabet symbols
o Γ = set of all tape symbols
o B = blank symbol
o δ : Q × Γ → Q × Γ × {L, R} is the transition function
o q0 = initial state
o F = set of all final states
NP-Complete
 The proposed system falls under class P (it is not NP-complete), because:
 it is a solvable problem;
 it is deterministic;
 a graph of time against data is linear: as the data increases, the time also increases;
 the complexity of such problems is O(n), O(n²), or O(n³).

 For example, if one gesture requires 1 ms to recognize, then n gestures require n ms. A graph of time against the number of gestures is therefore linear, as expected for a class P problem.

RESULTS

Graphical User Interface

Figure No. 6 shows the user interface; the Unity software presents this interface when the Kinect Sensor detects a user.

Calibration Pose Testing Model

Figure No. 7 illustrates the working of the calibration pose testing model. Once the user stands in the T position, the Kinect treats it as the calibration pose, because the T pose is assigned as the calibration pose in the system.

Body Skeleton and Joints Detection Model

Figure No. 8 illustrates the body skeleton and joints detection model at work. In this model, the user is detected by the Kinect sensor, which detects the user's body skeleton and joints. It highlights the skeleton with a virtual model and the joints with green points, as shown in the figure. This model was developed to properly detect the body along with its joints.

Fig. No. 9 illustrates the Virtual Dressing Room Testing Module. In this module, the Kinect sensor detects the user, the user provides the calibration pose, and the system begins to operate. The user raises one hand to choose from a variety of clothing options. When the user selects a cloth and taps it in the air, the cloth is applied to the user's body as indicated in the figure. To see the next cloth, the user may slide. On the screen, the user also sees information such as the brand name, breadth, height, and suggested size. Users may also take photos and post them on social networking sites like Facebook. The system can be used with a real-time background.
Multiple User Detection Testing Module

Fig. No. 10 illustrates the Multiple User Detection Testing Module. In this module, one user continues to use the system while multiple users appear in the background. The Kinect can detect multiple users, but the system works for the first user detected with the calibration pose.

Copyrights @Kalahari Journals Vol. 7 (Special Issue, Jan.-Feb. 2022)


International Journal of Mechanical Engineering
108
Table No. 4: Results of Identification for 5 Known Users

User     Successful identifications    Failed       % Accuracy
         in 20 ms (out of 5)           in 20 ms
User 1   5                             0            100%
User 2   4                             1            90%
User 3   5                             0            100%
User 4   3                             2            80%
User 5   5                             0            100%

7.10 Results of Identification for Multiple Users Together

Table No. 5: Results of Identification for Multiple Users Together

No. of users   Successful identifications    Failed       % Accuracy
               in 20 ms (out of 5)           in 20 ms
2 users        5                             0            100%
3 users        5                             1            90%
4 users        4                             0            80%
5 users        4                             2            80%
6 users        4                             0            80%

7.11 Results of Identification of User Height

Table No. 6: Results of Identification of User Height

User     Actual height     System-detected height   % Accuracy
User 1   5 feet 4 inch     5 feet 4 inch            100%
User 2   5 feet 5 inch     5 feet 4 inch            90%
User 3   5 feet 1 inch     5 feet                   90%
User 4   5 feet            5 feet                   100%
User 5   5 feet 3 inch     5 feet 3 inch            100%

7.12 Feedback

After testing the Virtual Dressing Room, the users were given a questionnaire to fill out in order to answer some questions about it. To begin, all participants used a conventional setup to evaluate the Virtual Dressing Room; a big screen was given special consideration for testing purposes. Since the system should be self-explanatory, no advice or introduction was provided; even when installed in a fabric shop, no explanation can be given to each and every customer. After becoming acquainted with the interaction portion, the participants started to choose various garments. The participants then took a closer look at the Virtual Dressing Room's clothing and its different features (e.g. changing the background). Finally, a questionnaire was handed out. The answers were evaluated using a Likert scale, and the questionnaire included the following items.

 Do you have a passion for fashion?
 Do you believe a virtual fitting room would be beneficial?
 What did you think of the virtual clothing's overall appearance?
 Did the fact that the clothing automatically fits your physique appeal to you?
 Was the Virtual Dressing Room interface straightforward and simple to comprehend?
 Do you believe that trying on clothing in various settings would aid you in choosing the right cloth?
 Do you like viewing yourself as a virtual person (a virtual avatar) rather than in the mirror?
 Would you go into a store and use a virtual fitting room?
 Would you purchase a cloth based on a virtual try-on?

CONCLUSION
Summary & Conclusion

 Shoppers sometimes complain about needing to spend hours physically putting on various ensembles as they look for items. With a tight deadline and only a limited amount of time, this can be taxing. This difficulty may be solved by using a Virtual Mirror that serves as a virtual trial room. With a Kinect sensor, nodal points on the body are drawn, and this data is used to produce an image of clothing over the user's body, eliminating the need to physically put on clothes and therefore saving time for the wearer.

 This was followed by an explanation of the underlying research, which involved developing an avatar, creating clothing, implementing real-time tracking systems, and comparing similar virtual try-ons. The technologies and frameworks used to build the Virtual Dressing Room were then thoroughly examined. Various aspects of the design process were then given more attention, including making garment models. After this comes the implementation, which includes details on the fabric colliders and how the garment behaves, for instance.

 We finished with a look at how the Virtual Dressing Room functioned and appeared in the tests. For a quick, easy, and accurate way to try on clothes, the Virtual Dressing Room seems to be a great option. Microsoft Kinect is well suited to a successful deployment: it avoids the time and money spent on the setup of other technologies like augmented-reality markers or live motion capture systems. It is a great addition to a textile company because of this. Additionally, the system can easily be set up at home with only a computer, a screen, and a Kinect. Additional functionality for an online shop may, for example, be the outcome. Customers will be able to visually try on clothing before making an online purchase, giving them a clearer idea of what the ensemble would look like and even showing them how the real material behaves. This is a huge advantage compared to the standard internet shopping experience.

The system performs the following features:

 The user can use gestures alone to choose and try on clothing.
 The user can try both the upper-body and lower-body textile sets.
 By tapping in the air, the user may capture a screenshot.
 The user can share the captured image on social media, such as Facebook.
 By raising a hand and tapping in the air, the user may change the clothing category.
 When using the Virtual Dressing Room, the user experiences a real-time scenario.
 The user can change the backdrop or modify it to their liking.
 On the screen, the user sees cloth size recommendations in a number of formats, such as height, width, and X, M, XXL, etc.
 On the screen, the user sees the number of views for the selected fabric.
 On the screen, the user sees the brand of the cloth.
 Even if several users are recognised by the Kinect Sensor, the user can comfortably use the system; this has no impact on the output of the primary user.
 The user's age and gender are displayed on the screen.

9.2 Contribution Work

This project deals with a depth camera and therefore uses the following types of technologies:

a. Markers
   Plotting human body points in an image.
b. Optical tracking
   Illusion and motion tracking of human bodies.
c. Depth cameras
   3D representation of clothes to provide a real-time wearing experience.
d. Gesture controls
   Gesture controls with gesture recognition are used to operate the system and toggle through clothes.
e. Use of OpenCV
   The system uses the Kinect SDK as well as the OpenCV library to detect humans more precisely.
f. Calibration pose
   A fixed calibration posture is not required to begin; the programmer can customise the settings to meet his needs.
g. Detection of upper body and lower body together
   The system detects both the upper and lower bodies, so the user can sample both the upper- and lower-body fabric sets.
h. Size recommendation
   On the screen, the user receives size recommendations in the form of height, width, and X, M, XXL, etc., to assist in the selection of appropriate clothing.
i. Realtime scenario
   When using the Virtual Dressing Room system, the user experiences a real-time scenario and can change the background to his liking.
Recommendations

1. The user should stand at a distance of at least 1.5 metres from the sensor to get proper results.

2. For the best results, the user should use this system in good lighting.

3. The user should use the Windows platform only, because the Kinect is made by Microsoft and works only with Microsoft software.

9.3 Future Scope

 Commercial usage of the Virtual Dressing Room is possible.
 It may be achieved by displaying the output via a Magic Mirror.
 In the future, all clothing databases may be collected and integrated into our system, allowing users to view all available clothing in one location. This will benefit both the retailer and the consumer.
 In the future, the Virtual Dressing Room may be deployed in different stores as their own application: instead of creating numerous trial rooms, shops can install a copy of our system with several magic mirrors to boost the shop's popularity and sell items without customers physically trying them on.
 In addition to clothing, this method may be used for a variety of accessories.

9.4 Limitation of Research Work

1. Distance between user and Kinect sensor:
The minimum distance between the user and the Kinect sensor is 1.5 metres, since the sensor has to record the whole user body, from head to toe. The Kinect is used to detect humans, and OpenCV is used to separate the upper and lower bodies of users. For this reason, it is necessary to capture the entire user body in the frame, and the distance between the user and the Kinect must be 1.5 metres or greater. If the space where the system is installed is insufficient, this may be regarded as a restriction.

2. Light conditions:
Because cameras record reflections, light plays a significant role when working with cameras. Adequate lighting is required to identify people smoothly and capture all joints. Unity is also used with a fixed light direction, so the lighting conditions must match the stated directional light.

3. Platform:
Because the Kinect is a Microsoft product, it can only be used on Windows systems with the .NET Framework, making this solution currently non-cross-platform.
REFERENCES
[1] Statista, 2019a. Retail e-commerce sales worldwide from 2014 to 2021 (in billion U.S. dollars). [online] Available at: https://www.statista.com/statistics/379046/worldwide-retail-e-commercesales/ [Accessed 15 August 2019].
[2] Statista, 2019b. Share of internet users who have purchased selected products online in the past 12 months as of 2018. [online] Available at: https://www.statista.com/statistics/276846/reach-of-top-online-retail-categories-worldwide/ [Accessed 15 August 2019].
[3] Turban, E., Outland, J., King, D., Lee, J.K., Liang, T.P. and Turban, D.C., 2017. Electronic commerce 2018: a managerial and social networks perspective. New York: Springer.
[4] Gemius, 2019. E-commerce w Polsce. Gemius dla e-Commerce Polska (E-commerce in Poland. Gemius for e-Commerce Poland). Warsaw. [online] Available at: https://www.gemius.pl/ecommerce2019/cac74bec2fac091ac0370e918bdf5e3e [Accessed 15 August 2019].
[5] Gao, Y., Brooks, E.P. and Brooks, A.L., 2014. The performance of self in the context of shopping in a virtual dressing room system. In: International Conference on HCI in Business. Cham: Springer, pp. 307-315.
[6] Patodiya, P.K. and Birla, P., 2017. Impact of Virtual-Try-On on Online Apparel Shopping Decisions. Review Journal of Philosophy & Social Science, Vol. 42, No. 2, pp. 197-206.
[7] Hester, Blake (January 14, 2020). "All the money in the world couldn't make Kinect happen".
[8] "E3 2010: Project Natal is 'Kinect'". IGN. June 13, 2010. Retrieved March 18, 2011.
[9] Totilo, Stephen (June 5, 2009). "Microsoft: Project Natal Can Support Multiple Players, See Fingers". Kotaku. Gawker Media. Retrieved June 6, 2009.
[10] Takahashi, Dean (June 2, 2009). "Microsoft games exec details how Project Natal was born". VentureBeat. Retrieved June 6, 2009. "The companies are doing a lot of great work with the cameras. But the magic is in the software. It's a combination of partners and our own software."
[11] Dudley, Brier (June 3, 2009). "E3: New info on Microsoft's Natal -- how it works, multiplayer and PC versions". Brier Dudley's Blog. The Seattle Times. Retrieved June 3, 2009. "We actually built a software platform that was what we wanted to have as content creators. And then [asked], 'OK, are there hardware solutions out there that plug in?' But the amount of software and the quality of software are really the innovation in Natal."
[12] Gibson, Ellie (June 5, 2009). "E3: Post-Natal Discussion". Eurogamer. Eurogamer Network. pp. 1-2. Retrieved June 9, 2009. "Essentially we do a 3D body scan of you. We graph 48 joints in your body and then those 48 joints are tracked in real-time, at 30 frames per second. So several for your head, shoulders, elbows, hands, feet…"
[13] Wilson, Mark; Buchanan, Matt (June 3, 2009). "Testing Project Natal: We Touched the Intangible". Gizmodo. Gawker Media. Retrieved June 6, 2009.
[14] French, Michael (November 11, 2009). "Natal launch details leak from secret Microsoft tour". MCV. Intent Media. Retrieved November 11, 2009. "November 2010 release, 5m units global ship, 14 games, and super-low sub-£50 price [...] Microsoft is planning to manufacture 5m units for day one release, with a mix of console and camera plus solus SKUs expected. [...] The device should cost under £100 when sold solo. The somewhat confirmed price is stated to be at $150 (USD) when sold alone; this is $50 higher than the original $99 projected price. [...] Another even says the camera could even retail for just £30."
[15] Channell, Mike (October 3, 2009). "Mark Rein Interview". Xbox 360: The Official Xbox Magazine. Future Publishing. Retrieved October 11, 2009. "And you know, I think they said they were going to ship Natal with every Xbox when they actually launch the thing, so everybody will have one."
[16] Brightman, James (August 21, 2009). "Xbox 360 Slim? Analysts Weigh In". IndustryGamers. pp. 1-2. Archived from the original on December 24, 2009. Retrieved September 2, 2009.
[17] Kennedy, Sam (June 12, 2009). "Rumor: Xbox Natal is Actually Microsoft's Next Console". 1UP.com. UGO Entertainment.
Retrieved June 19, 2009.
[18] "Generation When?". Edge Online. Future plc. June 18, 2009. p. 2. Archived from the original on January 5, 2011.
Retrieved June 22, 2009. Since the NES, every five years or so a distinct new wave of technology has washed across the
industry, bringing with it new power and functions to a market galvanised by the promise of faster, better, more.
[19] Crecente, Brian (September 25, 2009). "Playing Space Invaders, Katamari Damacy on Natal". Kotaku. Gawker Media.
Retrieved September 26, 2009.
[20] Chester, Nick (September 25, 2009). "TGS 09: Patching older 360 games to work with natal not possible". Destructoid.
Retrieved September 26, 2009.
[21] Elliott, Phil (January 7, 2010). "Microsoft drops internal Natal chip". GamesIndustry.biz. Retrieved March 18, 2011.
[22] Plunkett, Luke (June 13, 2010). "Project Natal Officially Renamed "Kinect", More Games Revealed". Kotaku.
Retrieved January 17,2020.
[23] Srinivasan K., Vivek S., “ Implementation Of Virtual Fitting Room Using Image Processing ”, IEEE International
Conference on Computer, Communication and Signal Processing (ICCCSP-2017).
[24] Ari Kusumaningsih; Arik Kurniawati, Eko Mulyanto Yuniarno; Mochammad Hariadi, “ User Experience Measurement On
Virtual Dressing Room Of Madura Batik Clothes” , 2017 International Conference on Sustainable Information Engineering
and Technology (SIET) .
[25] Ting Liu, LingZhi Li, XiWen Zhang, “Real-time 3D Virtual Dressing Based on Users Skeletons” , The 2017 4th International
Conference on Systems and Informatics (ICSAI 2017) .
[26] Naoyuki Yoshino, Stephen Karungaru and Kenji Terada , “Body Physical Measurement using Kinect for Vitual Dressing
Room”, 2017 6th IIAI International Congress on Advanced Applied Informatics.
[27] Dr. Anthony L. Brooks , Dr. EvaPetersson Brooks , “Towards an Inclusive Virtual Dressing Room for Wheelchair-Bound
Customers”, IEEE conference, 2014.
[28] Reizo NAKAMURA, Masaki IZUTSU, and Shosiro HATAKEYAMA, “Estimation Method of Clothes Size for Virtual
Fitting Room with Kinect Sensor” ,2013 IEEE International Conference on Systems, Man, and Cybernetics.
[29] Poonpong Boonbrahma, Charlee Kaewrata, Salin Boonbrahma, “Realistic Simulation in Virtual Fitting Room Using Physical
Properties of Fabrics”, 2015 International Conference on Virtual and Augmented Reality in Education.
[30] Umut Gültepe, Uğur Güdükbayn, “Real-time virtual fitting with body measurement and motion smoothing”, U. Gültepe, U.
Güdükbay / Computers & Graphics 43 (2014) 31–43.
[31] Ayushi Gahlot, Purvi Agarwal, Akshay Agarwal, Vijai Singh & Amit Gautam, “ Skeleton based Human Action Recognition
using Kinect”, International Journal of Computer Applications (0975 – 8887) Recent Trends in Future Prospective in
Engineering & Management Technology 2016.
[32] Cordero, Robert (22 September 2010). "Can Technology Help Fashion Etailers Tackle 'Try Before You Buy?'". Business of
Fashion.
[33] Pierrepont, Nathalie (19 December 2012). "Amongst Promises of a Perfect Fit, What Fits and What Doesn't?". Business of
Fashion.
[34] Bishop, Kathryn (25 August 2011). "Innovative Online Ring Sizer Arrives for E-tailers". Professional Jeweller. Retrieved 1
January2014.
[35] White, Tanika (10 September 2005). "Jeans shoppers can have a fit thanks to Levi's". The Baltimore Sun. Baltimore.
[36] "Unique Solutions buys 3D body scanning rival Intellifit". Just-Style. 24 March 2009.
[37] "Unique Solutions Begins Rollout Of Me-Ality Apparel Size-Matching Booths". Apparel Textiles. 21 October 2011.
[38] Velasquez, Angela (2020-11-03). "Weekday Launches Custom-Fit 'Body Scan Jeans' in Sweden". Sourcing Journal.
Retrieved 2021-05-03.

Copyrights @Kalahari Journals Vol. 7 (Special Issue, Jan.-Feb. 2022)


International Journal of Mechanical Engineering
113

You might also like