HCI Lab Manual
LONIKAND, PUNE
Lab Manual
Elective I Laboratory
Human Computer Interface (317525)
TE-SEM-1
LAB TEACHER
Prepared by
Prof. Trupti Farande
GROUP A
Assignment No. 1
Title : Assignment based on Technologies from the Knowledge Navigator video
Problem Definition: List five technologies from the Knowledge Navigator video
that were not around in 1987, but are in widespread use today.
Learning Objectives: Learn the technologies from the Knowledge Navigator video
Outcomes: After completion of this assignment students will be able to identify the
technologies from the Knowledge Navigator video that are in widespread use today.
Theory Concepts:
Videos
Apple produced several concept videos showcasing the idea. All of them featured a
tablet-style computer with numerous advanced capabilities, including an excellent
text-to-speech system with no hint of "computerese", a gesture-based interface
resembling the multi-touch interface later used on the iPhone, and an equally powerful
speech-understanding system, allowing the user to converse with the system via an
animated "butler" as the software agent.
In one vignette a university professor returns home and turns on his computer, in the
form of a tablet the size of a large-format book. The agent is a bow-tie-wearing butler
who appears on the screen and informs him that he has several calls waiting. He ignores
most of these, from his mother, and instead uses the system to compile data for a talk
on deforestation in the Amazon Rainforest. While he is doing this, the computer informs
him that a colleague is calling, and they then exchange data through their machines
while holding a video-based conversation.
In another such video, a young student uses a smaller handheld version of the system
to prompt him while he gives a class presentation on volcanoes, eventually sending a
movie of an exploding volcano to the video "blackboard". In a final installment a user
scans in a newspaper by placing it on the screen of the full-sized version, and then has
it help him learn to read by listening to him read the scanned results, and prompting
when he pauses.
Credits
The videos were funded and sponsored by Bud Colligan, Director of Apple's higher
education marketing group, written and creatively developed by Hugh Dubberly and
Doris Mitsch of Apple Creative Services, with technical and conceptual input from
Mike Liebhold of Apple's Advanced Technologies Group and advice from Alan Kay,
then an Apple Fellow. The videos were produced by The Kenwood Group in San
Francisco and directed by Randy Field. The director of photography was Bill Zarchy.
The post-production mix was done by Gary Clayton at Russian Hill Recording for The
Kenwood Group. The product industrial design was created by Gavin Ivester and Adam
Grosser of Apple design.
Samir Arora, a software engineer at Apple, was involved in R&D on application
navigation and what was then called hypermedia. He wrote an important white paper
entitled "Information Navigation: The Future of Computing". While working for Apple
CEO John Sculley at the time, Arora built the technology to show fluid access to linked
data displayed in a friendly manner, an emerging area of research at Apple.
The Knowledge Navigator video premiered in 1987 at Educom, the leading higher
education conference, in a keynote by John Sculley, with demos of multimedia,
hypertext and interactive learning directed by Bud Colligan.
The music featured in this video is Georg Anton Benda's Harpsichord Concerto in C.
Reception
The astute bow tie wearing software agent in the video has been the center of quite a
few heated discussions in the domain of human–computer interaction. It was criticized
as being an unrealistic portrayal of the capacities of any software agent in the
foreseeable future, or even in a distant future. Some user interface
professionals like Ben Shneiderman of the University of Maryland, College Park have
also criticized its use of a human likeness for giving a misleading idea of the nature of
any interaction with a computer, present or future.
Some visions put forth by proponents of the Semantic Web have been likened to that
of the Knowledge Navigator by Marshall and Shipman, who argue that some of these
visions "ignore the difficulty of scaling knowledge-based systems to reason across
domains, like Apple's Knowledge Navigator," and conclude that, as of 2003, "scenarios
of the complexity of [a previously quoted] Knowledge Navigator-like approach to
interacting with people and things in the world seem unlikely."
Siri
The notion of Siri was firmly planted at Apple 25 years earlier through the "Knowledge
Navigator" video, although the voiced assistant was only a concept prototype. In one of
the videos, a man asks the assistant to search for an article published five years
before his time; the assistant finds it and notes that the article is dated 2006, from
which we can conclude that the video is set in September 2011. In October 2011,
Apple launched Siri, a voice-activated personal assistant software vaguely similar to
that aspect of the Knowledge Navigator, just a month after the date implied in the
video.
The Making of Knowledge Navigator:
Apple made the Knowledge Navigator video for a keynote speech that John Sculley
gave at Educom (the premier college computer tradeshow and an important event in a
large market for Apple). Bud Colligan who was then running higher-education
marketing at Apple asked us to meet with John about the speech. John explained he
would show a couple examples of student projects using commercially available
software simulation packages and a couple university research projects Apple was
funding. He wanted three steps:
1. what students were doing now,
2. research that would soon move out of labs, and
3. a picture of the future of computing.
He asked us to suggest some ideas. We suggested a couple of approaches, including a
short "science-fiction video." John chose the video.
Working with Mike Liebhold (a researcher in Apple’s Advanced Technologies Group)
and Bud, we came up with a list of key technologies to illustrate in the video, e.g.,
networked collaboration and shared simulations, intelligent agents, integrated multi-
media and hypertext. John then highlighted these technologies in his speech.
We had about 6 weeks to write, shoot, and edit the video—and a budget of about
$60,000 for production. We began with as much research as we could do in a few
days. We talked with Aaron Marcus and Paul Saffo. Stewart Brand's book on the
“Media Lab” was also a source—as well as earlier visits to the Architecture Machine
Group. We also read William Gibson's "Neuromancer" and Vernor Vinge's "True
Names.” At Apple, Alan Kay, who was then an Apple Fellow, provided advice. Most
of the technical and conceptual input came from Mike Liebhold. We collaborated
with Gavin Ivester in Apple’s Product Design Group who designed the “device” and
had a wooden model built in little more than a week. Doris Mitsch, who worked in my
group, wrote the script. Randy Field directed the video, and the Kenwood Group handled
production.
The project had three management approval steps:
1. the concept of the science fiction video,
2. the key technology list, and
3. the script.
It moved quickly from script to shooting without a full storyboard—largely because
we didn't have time to make one. The only roughs were a few Polaroid snapshots of
the location, two sketches showing camera position and movement, and a few sketches
of the screen. We showed up on location very early and shot for more than 12 hours.
(Completing the shoot within one day was necessary to stay within budget.) The
computer screens were developed over a few days on a video paint box. (This was
before Photoshop.)
The video form suggested the talking agent as a way to advance the “story” and explain
what the professor was doing. Without the talking agent, the professor would be silent
and pointing mysteriously at a screen. We thought people would immediately
understand that the piece was science fiction because the computer agent converses
with the professor—something that only happened in Star Trek or Star Wars.
What is surprising is that the piece took on a life of its own. It spawned half a dozen or
more sequels within Apple, and several other companies made similar pieces. These
pieces were marketing materials. They supported the sale of computers by
suggesting that a company making them has a plan for the future. They were not
inventing new interface ideas. (The production cycles didn’t allow for that.) Instead,
they were about visualizing existing ideas—and pulling many of them together into a
reasonably coherent environment and scenario of use. A short while into the process of
making these videos, Alan Kay said, “The main question here is not is this technology
probable, but is this the way we want to use technology?" One effect of the video was
engendering a discussion (both inside Apple and outside) about what computers should
be like.
On another level, the videos became a sort of management tool. They suggested that
Apple had a vision of the future, and they prompted a popular internal myth that the
company was “inventing the future.”
Technologies
Apple's Siri
Siri is Apple's personal assistant for iOS, macOS, tvOS and watchOS devices that uses
voice recognition and is powered by artificial intelligence (AI).
Siri responds to users' spoken questions by speaking back to them through the device's
speaker and presenting relevant information on the home screen from certain apps, such
as Web Search or Calendar. The service also lets users dictate emails and text messages,
reads received emails and messages and performs a variety of other tasks.
Voice of Siri
ScanSoft, a software company that merged with Nuance Communications in 2005,
hired voiceover artist Susan Bennett that same year when the scheduled artist was
absent. Bennett recorded four hours of her voice each day for a month in a home
recording studio, and the sentences and phrases were linked together to create Siri's
voice. Until a friend emailed her in 2011, Bennett wasn't aware that she had become the
voice of Siri. Although Apple never acknowledged that Bennett was the original Siri
voice, audio experts at CNN confirmed it.
Karen Jacobsen, a voiceover artist known for her work on GPS systems, provided the
original Australian female voice. Jon Briggs, a former tech journalist, provided Siri's
British male voice.
Apple developed a new female voice for iOS 11 with deep learning technology by
recording hours of speech from hundreds of candidates.
Amazon's Alexa
Amazon Alexa, also known simply as Alexa, is a virtual assistant technology largely
based on a Polish speech synthesiser named Ivona, bought by Amazon in 2013. It was
first used in the Amazon Echo smart speaker and the Echo Dot, Echo Studio
and Amazon Tap speakers developed by Amazon Lab126. It is capable of voice
interaction, music playback, making to-do lists, setting alarms, streaming podcasts,
playing audiobooks, and providing weather, traffic, sports, and other real-time
information, such as news. Alexa can also control several smart devices using itself as
a home automation system. Users are able to extend the Alexa capabilities by installing
"skills" (additional functionality developed by third-party vendors, in other settings
more commonly called apps) such as weather programs and audio features. It uses
automatic speech recognition, natural language processing, and other forms of weak
AI to perform these tasks.
Most devices with Alexa allow users to activate the device using a wake-word (such as
Alexa or Amazon); other devices (such as the Amazon mobile app on
iOS or Android and Amazon Dash Wand) require the user to click a button to activate
Alexa's listening mode, although some phones also allow a user to say a command,
such as "Alexa" or "Alexa wake".
Functions
Alexa can perform a number of preset functions out-of-the-box, such as setting timers,
sharing the current weather, creating lists, accessing Wikipedia articles, and more. Users
say a designated "wake word" (the default is simply "Alexa") to alert an Alexa-enabled
device of an ensuing function command. Alexa listens for the command and performs
the appropriate function, or skill, to answer a question or command. When questions
are asked, Alexa converts sound waves into text which allows it to gather information
from various sources. Behind the scenes, the gathered data is then sometimes passed to
a variety of sources, including WolframAlpha, IMDb, AccuWeather, Yelp, Wikipedia,
and others, to generate suitable and accurate answers. Alexa-supported devices can
stream music from the owner's Amazon Music account and have built-in support for
Pandora and Spotify accounts. Alexa can play music from streaming services such as
Apple Music and Google Play Music from a phone or tablet.
In addition to performing pre-set functions, Alexa can also perform additional functions
through third-party skills that users can enable. Some of the most popular Alexa skills
in 2018 included "Question of the Day" and "National Geographic Geo Quiz" for trivia;
"TuneIn Live" to listen to live sporting events and news stations; "Big Sky" for hyper
local weather updates; "Sleep and Relaxation Sounds" for listening to calming sounds;
"Sesame Street" for children's entertainment; and "Fitbit" for Fitbit users who want to
check in on their health stats In 2019, Apple, Google, Amazon, and Zigbee Alliance
announced a partnership to make their smart home products work together.
Microsoft's Cortana
Cortana is a virtual assistant developed by Microsoft that uses the Bing search engine
to perform tasks such as setting reminders and answering questions for the user.
Cortana is currently available
in English, Portuguese, French, German, Italian, Spanish, Chinese, and Japanese
language editions, depending on the software platform and region in which it is used.
Microsoft began reducing the prevalence of Cortana and converting it from an assistant
into different software integrations in 2019. It was split from the Windows 10 search
bar in April 2019. In January 2020, the Cortana mobile app was removed from certain
markets, and on March 31, 2021, the Cortana mobile app was shut down globally.
Functionality
Cortana can set reminders, recognize natural voice without the requirement for
keyboard input, and answer questions using information from the Bing search
engine (for example, current weather and traffic conditions, sports scores, and
biographies). Searches using Windows 10 are made only with the Microsoft
Bing search engine, and all links will open with Microsoft Edge, except when a screen
reader such as Narrator is being used, where the links will open in Internet Explorer.
Windows Phone 8.1's universal Bing SmartSearch features are incorporated into
Cortana, which replaced the previous Bing Search app that was activated when a user
pressed the "Search" button on their device. Cortana includes a music recognition
service and can simulate rolling dice and flipping a coin. Cortana's "Concert Watch"
monitors Bing searches to determine the bands or musicians that interest the user. It
integrates with the Microsoft Band for Windows Phone devices; if connected via
Bluetooth, it can create reminders and phone notifications.
Since the Lumia Denim mobile phone series, launched in October 2014, active listening
was added to Cortana, enabling it to be invoked with the phrase "Hey Cortana". It can
then be controlled as usual. Some devices in the United Kingdom on O2 received the
Lumia Denim update without the feature, but this was later clarified as a bug and
Microsoft has since fixed it.
Cortana integrates with services such as Foursquare to provide restaurant and local
attraction recommendations and LIFX to control smart light bulbs.
Samsung's Bixby
Samsung’s Bixby digital assistant lets you control your smartphone and select
connected accessories. You can open apps, check the weather, play music, toggle
Bluetooth, and much more. You’ll find everything you need to know about
the Google rival below, including how to access it, the features it offers, and which
devices it’s available on.
The most interesting and helpful component is of course Bixby Voice, which lets you
use voice commands to get stuff done. It works with all Samsung apps and a few third-
party apps, including Instagram, Gmail, Facebook, and YouTube.
With Voice you can send text messages, check sports scores, turn down screen
brightness, check your calendar, launch apps, and more. The tech can also read out your
latest incoming messages, and flip between male and female versions. Like Google
Assistant, Bixby can handle some more complicated two-step commands, such as
creating an album with your vacation photos and sharing it with a friend.
Program:
Output:
Assignment No. 2
Title : Assignment based on GOMS (Goals, Operators, Methods and Selection rules)
modelling technique
Problem Definition: Apply the GOMS (Goals, Operators, Methods and Selection rules)
modelling technique to a given interface scenario.
Learning Objectives: Learn the GOMS (Goals, Operators, Methods and Selection rules)
modelling technique
Outcomes: After completion of this assignment students will be able to apply the GOMS
modelling technique to analyze user interfaces.
Theory Concepts:
Goals, operators, methods, and selection rules is a method derived from human
computer interaction (HCI) and constructs a description of human performance. The
level of granularity will vary based on the needs of the analysis. The goal is what the
user wants to accomplish. The operator is what the user does to accomplish the goal.
The method is a series of operators that are used to accomplish the goal. Selection rules
are used if there are multiple methods, to determine how one was selected over the
others.
Card, Moran, and Newell developed GOMS in the 1980s. The simplest variant is
KLM-GOMS, the Keystroke-Level Model.
Card, Moran, and Newell (The Keystroke-level Model for User Performance with
Interactive Systems, Communications of the ACM, 23:396-410, 1980) measured the
time for users to perform a series of gestures on the computer. They discovered a
fundamental principle:
The total time to perform a sequence of gestures is the sum of the times of the
individual gestures.
A lot is implied in this statement. The most important is that there are fundamental
gestures. Individual users perform the fundamental gestures in different times; the
researchers attempted to determine typical values:
K = 0.2 sec Keystroke: The time to press a key (for an average skilled typist)
P = 1.1 sec Pointing: The time to point the mouse at a target on the display
H = 0.4 sec Homing: The time for the user to move hands between keyboard and mouse
M = 1.35 sec Mental: The time for the user to prepare for the next step
R = ? Responding: The time for the computer to respond to the user inputs.
The variation of the timings across users can be as much as 100%; for example, an
expert typist typing 200 words per minute averages 0.06 sec per keystroke (the
measurement assumes 5 characters per word). So the model cannot accurately predict
the response time of an individual user. Chris Blazek and I have measured these
variables for a web user and
they are surprisingly accurate. Even without precise gesture times for a specific user,
the model can be used to determine times for expert users and compare across
interfaces.
We calculate the total response time by listing the individual gestures and summing their
individual execution times. The difficult part is determining where a mental preparation,
M, occurs. The researchers determined heuristic rules for placing mental operations:
Rule 5: Deletion of overlapped Ms: Do not count any portion of an M that overlaps
with a command response. (This is the reason that a responsive interface only needs to
respond in a second.)
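To make the arithmetic concrete, here is a minimal sketch of a KLM calculator in
JavaScript, using the typical operator times listed above (the function name klmTime is
our own illustration, not part of the original analysis):

// Typical KLM operator times in seconds (Card, Moran & Newell)
const OP_TIMES = { K: 0.2, P: 1.1, H: 0.4, M: 1.35 };

// Sum the execution times of a gesture sequence written as a string,
// e.g. "MKKKKMK" for mental preparations and keystrokes
function klmTime(gestures) {
  return [...gestures].reduce((total, op) => total + OP_TIMES[op], 0);
}

// The keyboard sequence analyzed later in this assignment:
console.log(klmTime('MKKKKMK').toFixed(1)); // "3.7" seconds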
Design 1: A Dialog
The interface consists of two radio buttons to choose between centimeters and inches
and two text fields: one text field to type the 4 characters for the distance and the other
to display the result.
Sequence of Tasks:
Design 2: A GUI
The GUI has two measuring sticks with pointers, one in inches and the other in centimeters.
When the user moves the pointer on one stick to the correct distance the other pointer
correctly points to the corresponding number in the converted unit. There is only room
on the sticks to display an order of magnitude, in other words the user must use buttons
to expand or compress the scale. So if the distance that the user wants to convert is not
on the screen, the user must first expand the scale, move the scale, and then compress
the scale to refine the distance.
Our first analysis assumes that the distance that the user wishes to convert is displayed
on the yard stick so that the user does not have to expand and compress the scales.
Sequence of gestures:
1. Move hand to mouse
2. Move mouse to pointer
3. Click mouse
4. Move pointer, dragging the pointer to a new distance
5. Release pointer
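Using the klmTime sketch from earlier, a rough tally of this sequence can be computed.
Treating each mouse movement as a P (pointing) operator and the click and release as
keystroke-class K operators is our assumption; the manual does not assign times to this
sequence, and M operators are omitted for simplicity:

// H: hand to mouse, P: point at pointer, K: click, P: drag, K: release
console.log(klmTime('HPKPK').toFixed(1)); // "3.0" seconds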
A message box appears asking the user to input the units (cm/inches), the distance (4
digits) followed by return/enter.
That is an improvement over the dialog design, but still short of the ideal design.
The studious reader will declare that we could eliminate a keystroke by replacing the
enter keystroke with the units; then the keystrokes are MKKKKMK = 3.7 sec. But this is
still longer than the minimum time.
Another student will say that maybe we must input all four characters; the units are a
necessary part of the message, and the user will have to think about them before entering
the units. But if there is a design that does not require all four keystrokes, then the last
design is not ideal.
A box appears with a text field for entering 3 digits, and two output text fields appear
automatically: one showing the conversion in centimeters, the other in inches.
With respect to minimum user time the solution is ideal, but it may not be the solution
we prefer. There is more to design than minimum user time and information efficiency.
Assignment No. 3
Title : Assignment based on critique of the web interfaces of two educational institutes
Problem Definition: Using your observations from your small user study and your
knowledge of Web Design guidelines and general UI design
principles, critique the interfaces of any two educational
institutes and make suggestions for improvement.
Learning Objectives: Learn the Knowledge of Web Design and UI design principles
Outcomes: After completion of this assignment students will be able to give suggestions
for improving the Web Design and UI design of any website.
Theory Concepts:
An effective website design should fulfil its intended function by conveying its
particular message whilst simultaneously engaging the visitor. Several factors such as
consistency, colours, typography, imagery, simplicity, and functionality contribute to
good website design.
When designing a website there are many key factors that will contribute to how it is
perceived. A well-designed website can help build trust and guide visitors to take
action. Creating a great user experience involves making sure your website design is
optimised for usability (form and aesthetics) and for how easy it is to use (functionality).
Below are some guidelines that will help you when considering your next web project.
1. WEBSITE PURPOSE
Your website needs to accommodate the needs of the user. Having a simple clear
intention on all pages will help the user interact with what you have to offer. What is
the purpose of your website? Are you imparting practical information like a 'How-to'
guide? Is it an entertainment website like sports coverage, or are you selling a
product to the user? There are many different purposes that websites may have, but
there are core purposes common to all websites:
1. Describing Expertise
2. Building Your Reputation
3. Generating Leads
4. Sales and After Care
2. SIMPLICITY
Simplicity is the best way to go when considering the user experience and the usability
of your website. Below are ways to achieve simplicity through design.
Colour
Colour has the power to communicate messages and evoke emotional responses.
Finding a colour palette that fits your brand will allow you to influence your customer’s
behaviour towards your brand. Keep the colour selection limited to less than 5 colours.
Complementary colours work very well. Pleasing colour combinations increase
customer engagement and make the user feel good.
Type
Typography has an important role to play on your website. It commands attention and
works as the visual interpretation of the brand voice. Typefaces should be legible and
only use a maximum of 3 different fonts on the website.
Imagery
Imagery is every visual aspect used within communications. This includes still
photography, illustration, video and all forms of graphics. All imagery should be
expressive and capture the spirit of the company and act as the embodiment of their
brand personality. Most of the initial information we consume on websites is visual and
as a first impression, it is important that high-quality images are used to form an
impression of professionalism and credibility in the visitors’ minds.
3. NAVIGATION
Navigation is the wayfinding system used on websites where visitors interact and find
what they are looking for. Website navigation is key to retaining visitors. If the website
navigation is confusing visitors will give up and find what they need elsewhere.
Keeping navigation simple, intuitive and consistent on every page is key.
4. VISUAL HIERARCHY
Visual hierarchy is the arrangement of elements in order of importance. This is done
either by size, colour, imagery, contrast, typography, whitespace, texture and style. One
of the most important functions of visual hierarchy is to establish a focal point; this
shows visitors where the most important information is.
5. CONTENT
An effective website has both great design and great content. Using compelling
language great content can attract and influence visitors by converting them into
customers.
6. LOAD TIME
Waiting for a website to load will lose visitors. Nearly half of web visitors expect a site
to load in 2 seconds or less and they will potentially leave a site that isn’t loaded within
3 seconds. Optimising image sizes will help load your site faster.
7. MOBILE FRIENDLY
More people are using their phones or other devices to browse the web. It is important
to consider building your website with a responsive layout where your website can
adjust to different screens.
Ease of use can manifest itself in many different ways in a screen design. It tends to be
closely related to high standards of usability, which can be difficult to live up to, even
for the most experienced among us.
A good example is the navigation design, which is the backbone of any product but
represents a challenging aspect of UI. You want the navigation to feel effortless, to have
users driving down the right roads with no need for signs to help them. The more content
the product holds, the tougher it is to create a system that holds it all together in a way
that makes it easy for users to navigate and discover new areas of the product.
Users that encounter any product for the first time have to explore a bit and discover
the primary features, sometimes waiting a bit to advance onto the secondary ones. This
first encounter is crucial, because it sets the tone for the experience and tells
users what to expect. Their first impression is likely to dictate if they stick around or if
they give up and abandon the product right there on the spot.
One of the most difficult things about UI design is that everything depends. The nature
of the product will dictate what navigation is more appropriate, the users will affect the
way that information is categorized and presented. The right UI pattern will depend on
the function and the people using the product. Unfortunately, there’s never a one-size-
fits-all approach to UI design. Part of the art of UI design is seeing the context and using
that information to create an interface that still lives up to high standards of usability.
There’s a right balance of power that users want. They want to feel in control, to have
freedom to approach tasks in their own way. With that said, they also don’t want too
much control, which can lead to overwhelmed users that quickly grow tired of having
to make so many decisions. That is called the paradox of choice. When faced with too
much freedom, most users stop enjoying the experience and instead resent the
responsibility. Choosing, after all, requires cognitive effort.
This requires a balance that those in the gaming industry are intimately familiar with.
Gamers enjoy choices, but overdoing it can ruin the game experience. Game
UI design is all about giving users just the right amount of power.
Users want the freedom to do what they want as well as freedom from bad
consequences. In UI design, that means giving them the power to do and undo things,
so users don’t ever come to regret what they chose to do with the product.
That’s why UI designers operate within a certain margin of control that they pass on to
users. They narrow down which parts of the product and the experience can be
customized, identifying areas where users can create their own take on the design. A
color change may sound silly to some, but it makes users happy to have a choice in the
interface.
More complex stuff, like changing the general hierarchy of information or customizing
highly technical aspects of the product – those don’t fall on the user to decide. People
are happy to be walked to success, so they don’t need to worry about the complex or
small. They want to focus on the task, on having fun. As UI designers, it’s our job to
help them get there.
A solid example of this can be seen with any dashboard design, where complex
information is broken down and made easy to digest. Even if you can customize the
dashboard itself, the soul of the design will remain in order to get the main job done.
But what makes a layout UI work? How do designers know where each component
goes, and how it all fits together?
The answer is a combination of factors that UI designers take into account. First, there’s
the general rule of proximity of elements and visual hierarchy. This is about making the
important things bigger and brighter, letting users know right away that this is what
they should be focusing on. The hierarchy is a way of communicating to the user where
their eye should go, what they should do.
The right hierarchy has the power to make users understand the content immediately,
without using a single word. The proximity between elements plays a similar role, with
components in close proximity being somehow related or closely connected.
Whitespace also plays an important role in the layout design. Most people who are only
beginning to learn about UI design often underestimate the importance of whitespace
or how much of it they’ll need to create a good visual hierarchy. The truly skilled
designers use that empty space to give the user's eye some relief and let the components
guide their gaze through the screen. This can be taken to an extreme with the trend of
minimalist website design.
Offer a consistent interface
UI designers know that maintaining a consistent design is important, for multiple
reasons. When the word “consistent” is thrown around in the world of web design, it
applies to both the visual and the interactions. The product needs to offer the same icons
and elements, no matter its size or how much content it holds. That means that once the
design team has settled on a visual identity, the product can’t stray from it.
The consistency is important because it will significantly help users learn their way
around the product. That first learning curve is unavoidable for a brand new experience,
but UI designers can shorten it. Ultimately, you want users to recognize the individual
components after having seen them once.
Buttons, for example. After using the product just for a little bit, users should recognize
primary and positive buttons from secondary ones. Users are already making the effort
to learn how the product works and what it does – don’t make them learn what 9
different buttons mean. This means that buttons should not only look the same, they
need to behave the same.
When it comes to the consistency of UI design, you want to be predictable. You want
users to know what that button will do without the need to press it. A good example is
having consistent button states, so your users know exactly how buttons behave
throughout the entire product.
Visual metaphors can be a handy way to communicate an abstract concept, putting it
into more concrete terms. Without using any words, designers can use their visual skills to
convey ideas and the product becomes much easier to understand.
User personas capture the idea of the final user, giving them a face and offering details
of their lives and what they want. Originally created by the marketing industry, they are
just as helpful for UI designers. Despite it being a fictitious profile of a person that
doesn’t exist, the idea and the group of people that it represents are very much real. It
gives the design team clarity on what the users want, what they experience and their
ultimate goals.
On a similar note, mental models are also crucial. Rather than capturing the ideal user
they capture how those users think. It showcases their reasoning, which can be very
helpful in UI design. More often than not, when screens or elements don’t perform well
it’s because they simply don’t respect the user’s mental models – which means users
don’t get it. For them, that just doesn’t make sense.
The same can be said for other materials, such as user flows or user scenarios. All of
these materials add value to the design process, resulting in a tailored product that is
more likely to succeed.
Slowly and over time, designers will build on this grey base and add more details. It’s
true that some design teams start testing very early, even when wireframes are nothing
but a bunch of boxes. Regardless, this grey design grows as the designer adds colors,
details and actual content.
User feedback and context can come in many forms. One of the most commonly
used is microinteractions, which tell the user that things are clickable or that the system
is working behind the screen. A loading icon that holds a brief interaction is the perfect
example.
You want the feedback to be instantaneous, so there’s no room for confusion. Users
don’t like uncertainty and using feedback can be a way to have much more efficient
communication between user and product. With something as simple as a button
moving slightly up when the cursor hovers above it, UI designers can tell users that
button can be clicked or that the element is responsive and dynamic.
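For instance, that hover cue can be a couple of lines of CSS (the .button class name
and the exact distances below are arbitrary assumptions, a sketch rather than a
prescription):

.button {
  transition: transform 120ms ease-out; /* animate the lift smoothly */
}
.button:hover {
  transform: translateY(-2px); /* lift slightly to signal the button is clickable */
}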
These simple cues are something UI designers have grown to do almost instinctively.
They know that users need this sort of context in order for the product to shine, and so
they look for these opportunities everywhere. These little details matter and make the
entire experience better for users.
A classic example of simple but crucial feedback is the different states of key
components, such as toggles, dropdowns, and the well-loved card UI design pattern.
Starting off as a bunch of boxes and tones of grey, UI designers will use the design
materials like user personas to create a wireframe that fits the user. This is about
capturing the general functionality of the product, laying the foundation of the bare
bones. Things like the navigation, main pieces of content and the representation of the
primary features – they all play a part in the wireframing process.
As the team begins to test the general usability and performance of the wireframe, a
cycle emerges. The wireframe is tested and the results will dictate what parts need to
be improved or completely changed. One of the best things about wireframes is that
putting them together quickly is possible, bringing the ability to quickly change course
if need be.
Truly skilled UI designers are all about wireframing. They understand the process and
what information to use, which factors influence the design. They go out of their way
to validate the wireframe at every turn, before a new layer of detail is added. Slowly,
the wireframe will give way to a high-fidelity prototype, where all the final visuals are
represented.
Get familiar with user testing and the world of usability
Usability can mean different things to different design teams. It’s often the case that
most designers will associate user testing with the performance of the design, in terms
of how many users can complete a task under X time. To others, the testing takes a
bigger meaning, representing the very point-of-view of the users, with the data being
the only way to know what users truly want.
Ultimately, user testing is done over an extended period of time, starting in the
wireframing stage and going all the way to the release of the product (sometimes even
further). Designers will invest real time and effort into testing, simply because it pays
off. Any changes that the testing leads to are welcome, because they represent
improvement done at little cost. If these improvements needed to be made much later
in the project, they would have come in the form of delays and absurd costs.
The methods can vary, given how many alternatives there are out there now. From
unmoderated tests with hundreds of participants to moderated interviews and
observation sessions, there is a right path for every team no matter the budget and
time constraints.
Assignment No. 4
Title : Assignment based on interactive web page design using HTML, CSS and
JavaScript
Learning Objectives: Learn the knowledge of interactive web page design using
HTML, CSS, JavaScript and Document Object Model with
JavaScript and CSS.
Theory Concepts:
Features:
This page consists of a centered container with 3 tabs: one each for showing text, an
image and a YouTube video. A div containing three buttons is used as a tab bar, and
pressing each button displays the corresponding tab. Only one tab should be displayed
at a time. The button showing the current tab must remain highlighted from the
moment your page is loaded.
The main container should have a minimum width of 300px and should scale with the
window's size.
It should remain centered both vertically and horizontally. All tabs should have 10-20
px of padding. Individual tabs should be the same height regardless of the content.
Image tab:
Show the embedded image from an external URL. Image should be both vertically
and horizontally centered. It should maintain the aspect ratio and resize to fill the
container horizontally. Use overflow property to keep the image within the container.
Video tab:
Display a YouTube video using the iframe HTML tag. You can get a pre-written tag
from YouTube's embedding options. The video should fill the tab vertically and
horizontally.
You should choose a Google web font for your page and import it either using a
link tag in your HTML or using @import directly in your CSS.
You are free to choose your own color scheme, button style and fonts. Try to make it
beautiful and get creative!
Overall Hints:
- CSS should control the visibility of the different elements via the 'is-visible' class
and the display property.
- Your JavaScript needs to implement onClick listeners for the buttons, and set the
correct membership in the is-visible class. Loop through the example-content
elements to do this (see the sketch below).
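A minimal sketch of that tab logic, assuming three buttons in a tab-bar div and a
data-tab attribute linking each button to a tab's id (those names are our assumptions;
the 'is-visible' and 'example-content' class names come from the hints above):

<style>
  /* Only the tab carrying the is-visible class is displayed */
  .example-content { display: none; }
  .example-content.is-visible { display: block; }
  .tab-bar button.is-active { background: #0066cc; color: #fff; }
</style>
<div class="tab-bar">
  <button data-tab="text-tab" class="is-active">Text</button>
  <button data-tab="image-tab">Image</button>
  <button data-tab="video-tab">Video</button>
</div>
<div id="text-tab" class="example-content is-visible">Some text...</div>
<div id="image-tab" class="example-content"><img src="https://example.com/photo.jpg" alt="photo"></div>
<div id="video-tab" class="example-content"><!-- YouTube iframe goes here --></div>
<script>
  // onClick listeners: loop through the example-content elements and toggle
  // is-visible so that exactly one tab is shown at a time
  document.querySelectorAll('.tab-bar button').forEach((button) => {
    button.addEventListener('click', () => {
      document.querySelectorAll('.example-content').forEach((tab) => {
        tab.classList.toggle('is-visible', tab.id === button.dataset.tab);
      });
      // keep the button for the current tab highlighted
      document.querySelectorAll('.tab-bar button').forEach((b) => {
        b.classList.toggle('is-active', b === button);
      });
    });
  });
</script>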
Structure of DOM: The DOM can be thought of as a tree or a forest (more than one tree).
The term structure model is sometimes used to describe the tree-like representation of
a document. Each branch of the tree ends in a node, and each node contains objects.
Event listeners can be added to nodes and triggered on an occurrence of a given event.
One important property of DOM structure models is structural isomorphism: if any
two DOM implementations are used to create a representation of the same document,
they will create the same structure model, with precisely the same objects and
relationships.
Properties of DOM: Let's see the properties of the document object that can be
accessed and modified through it.
Window Object: The Window object is the browser object at the top of the
hierarchy. It is like an API used to set and access all the properties and methods
of the browser. It is created automatically by the browser.
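As a small illustration of these ideas (the h1 selector and the messages are arbitrary;
the sketch assumes the page contains an <h1> element):

// The document is a node tree; querySelector walks it to find a node
const heading = document.querySelector('h1');

// Event listeners can be added to nodes and are triggered when the event occurs
heading.addEventListener('click', () => {
  // Properties of the document object can be read and modified
  document.title = 'Heading clicked';
  // The Window object sits at the top of the hierarchy and exposes
  // browser-level properties and methods
  console.log(window.innerWidth, window.location.href);
});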
Assignment No. 5
Title : Assignment based on knowledge of user interfaces using JavaScript, CSS and
HTML
Problem Definition: Develop interactive user interfaces using JavaScript, CSS and
HTML, specifically:
a. implementation of form-based data entry, input groups, and
button elements using the Bootstrap library.
b. use of responsive web design (RWD) principles,
c. implementing JavaScript communication between the input
forms and a custom visualization component (see the sketch below).
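Below is a minimal, self-contained sketch covering parts (a)-(c): a Bootstrap input
group with a button, a responsive layout, and a script that pushes form input into a
simple custom visualization (a bar whose width tracks the value). The element ids
(distForm, distInput, viz), the bar visualization, and the Bootstrap version pinned in the
CDN URL are illustrative assumptions:

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <!-- (b) RWD: the viewport meta tag lets the layout adapt to device width -->
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <!-- Bootstrap CSS from the jsDelivr CDN; any 5.x release should work -->
  <link href="https://cdn.jsdelivr.net/npm/bootstrap@5.3.3/dist/css/bootstrap.min.css" rel="stylesheet">
  <style>
    #viz { height: 24px; background: #0d6efd; width: 0; transition: width 200ms; }
  </style>
</head>
<body class="container py-4">
  <!-- (a) Form-based data entry: an input group plus a button element -->
  <!-- (b) RWD: the col-12/col-md-* classes stack the fields on narrow screens -->
  <form id="distForm" class="row g-2">
    <div class="col-12 col-md-8">
      <div class="input-group">
        <span class="input-group-text">Distance</span>
        <input id="distInput" type="number" class="form-control" min="0" max="100" value="50">
        <span class="input-group-text">cm</span>
      </div>
    </div>
    <div class="col-12 col-md-4">
      <button type="submit" class="btn btn-primary w-100">Update chart</button>
    </div>
  </form>
  <!-- (c) A custom visualization component: a bar that tracks the input -->
  <div class="mt-4 border p-2"><div id="viz"></div></div>
  <script>
    // (c) JavaScript communication between the form and the visualization
    document.getElementById('distForm').addEventListener('submit', (e) => {
      e.preventDefault(); // keep the page from reloading on submit
      const value = Number(document.getElementById('distInput').value);
      document.getElementById('viz').style.width = value + '%';
    });
  </script>
</body>
</html>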
Theory Concepts:
HTML
At the user interface level, the platform provides a rich visual editor that allows web
interfaces to be composed by dragging and dropping. Instead of purely writing HTML,
developers use visual widgets. These widgets are wrapped and are easy to reuse just by
dragging and dropping without everyone needing to understand how they are built.
The core visual widgets represent very closely what developers are used to with
HTML: a div, an input, a button, a table, and so forth. All of them have a direct,
well-known HTML representation. For example, dragging a "Container"
generates a div.
Custom HTML widgets can be used to include whatever HTML is needed. An
example is a CMS that loads dynamic, database-stored HTML content in a page.
Widgets can be composed and wrapped in “web blocks,” which are similar to
user controls, and reusable layouts with “placeholders,” which are similar to
Master Pages with “holes” that will be filled in when instantiated.
All widgets can be customized in the properties box via “Extended Properties,"
which will be directly translated to HTML attributes. This includes attributes
that are not supported today in the base HTML definition. For example, if
someone wants to use custom “data-” attributes, they can just add them in.
All widgets have properties such as RuntimeId (HTML attribute ID) or Style
(class), which allow them to be used in/with standard JavaScript or CSS.
All widgets have a well-defined API that is tracked by the platform to ensure
that they are being properly used across all applications that reuse it.
In summary, the visual editor is very similar to a view templating system, such as .NET
ASPX, Java JSP or Ruby ERBs, where users define the HTML and include dynamic
model/presenter bound expressions.
JavaScript
OutSystems provides a very simple-to-use AJAX mechanism. However, developers can
also use JavaScript extensively to customize how users interact with their applications,
to create client-side custom validations and dynamic behaviors, or even to create custom,
very specific, AJAX interactions. For example, each application can have an
application-wide defined JavaScript file or set of files included in resources. Page-
specific JavaScript can also be defined.
OutSystems includes jQuery by default in all applications. But, developers also have the
option to include their own JavaScript frameworks (prototype, jQuery, jQueryUI, dojo)
and use them throughout applications just as they would in any HTML page.
Many JavaScript-based widgets, such as jQuery plugins, have already been packaged
into easy to reuse web blocks by OutSystems Community members and published
to OutSystems Forge. There are examples for kParallax, Drag and Drop lists, Table
freeze cells, Touch Drag and Drop, Sliders, intro.js, or the well-known Google Maps.
Even some of the OutSystems built-in widgets are a mix of JavaScript, JSON and back-
end logic. For example, the OutSystems Charting widget is a wrapper over the well-
known Highcharts library. Developers can use the properties exposed by the
OutSystems widget, or use the full JSON API provided by Highcharts to configure the
widget.
One example is the jVectorMap JavaScript library, wrapped and reused in the visual
designer to display website access metrics over a world map. Another is a jQuery
slideshow plugin wrapped and reused in the visual designer.
CSS
OutSystems UIs are purely CSS3-based. A predefined set of "themes," a mix of CSS and
layout templates, can be used in applications. However, developers can reuse existing
CSS or create their own. A common example is to reuse Bootstrap, which is tweaked so
that its grid system is reused by the OutSystems visual editor to drag and drop page
layouts, instead of having to manually input the CSS columns for every element.
Themes are hierarchical, which means that there is a CSS hierarchy in OutSystems.
Developers can define an application-wide CSS in one theme and redefine only parts of
it for a particular section of an application. Themes can also be reused by applications
when there is a standard style guide.
The built-in CSS text editor supports autocomplete, similar to that of Firebug or Chrome
Inspector, and immediately previews the results in the page without having to
recompile/redeploy applications.
Common CSS styling properties including padding, margin, color, border, and shadow,
can also be adjusted from directly within the IDE using the visual styles editor panel.
One example shows information overlaid on maps using an external user interface
component. Another example is a typical page in a website that provides the
visualization of information in dynamic and static charts.
Wodify, a SaaS solution for CrossFit gyms, is built with OutSystems and currently
supports more than 200,000 users around the globe. Although most of the functionality
in Wodify is created with OutSystems built-in user interface widgets, it is a great
example of how OutSystems interfaces can be freely styled using CSS to achieve a
consistent look and feel and support several devices and form factors.
User Interface (UI) defines the way humans interact with information systems.
In layman's terms, a User Interface (UI) is a series of pages, screens, buttons, forms and
other visual elements that are used to interact with the device. Every app and every
website has a user interface.
The CSS user interface properties are used to change how an element behaves as a
standard user interface element. In this article we will discuss the following user
interface properties:
resize
outline-offset
resize Property: The resize property allows the user to resize a box. It does not apply
to inline elements or to block elements where overflow is visible; for this property to
work, overflow must be set to "scroll", "auto", or "hidden".
Syntax:
resize: horizontal|vertical|both;
horizontal: This property is used to resize the width of the element.
Syntax:
resize: horizontal;
vertical: This property is used to resize the height of the element.
Syntax:
resize: vertical;
both: This property is used to resize both the height and width of the element.
Syntax:
resize: both;
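A minimal sketch of a user-resizable box (the class name, dimensions, and border are
arbitrary choices):

div.resizable {
  resize: both;    /* let the user drag the bottom-right corner */
  overflow: auto;  /* required: overflow must not be visible */
  width: 200px;
  height: 100px;
  border: 1px solid #888;
}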
Supported Browsers: The browsers that support the resize property are listed below:
- Apple Safari 4.0
- Google Chrome 4.0
- Firefox 5.0 (4.0 with the -moz- prefix)
- Opera 15.0
- Internet Explorer: not supported
outline-offset: The outline-offset property in CSS is used to set the amount of space
between an outline and the edge or border of an element. The space between the element
and its outline is transparent.
Syntax:
outline-offset: length;
Note: Length is the width of the space between the element and its outline.
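For example (the 4px offset, dashed style, and color are arbitrary choices):

input:focus {
  outline: 2px dashed #06c;
  outline-offset: 4px; /* 4px of transparent space between the border and the outline */
}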
Assignment No. 6
Title : Assignment based on 3D modeling of a table lamp in Blender
Outcomes: After completion of this assignment students will be able to make a Table
Lamp in Blender – a 3D modeling software
Theory Concepts:
The 3D printed table lamp should be:
- easy to use
- sturdy
- durable
- customizable
Steps to Design Your Own 3D Printed Table Lamp
Step 1: Download the Design Kit
Get started by downloading the table lamp kit for the following 3D modeling program:
Blender
Step 2:
We make use of a standard component to attach the shade to the base. This standard
component will be inserted and glued to the lamp fitting (blue) by us. It is important
for you, as the designer, to be aware of this and make use of this blue part in your
design.
Step 3: Safety Advice
Since we’re dealing with electricity, you have to make sure to include a spherical
zone around the light bulb of Ø 6 cm / Ø 2.4 inch. This zone needs to remain
completely open (hollow); it cannot contain any material. The design kit contains a Ø
6 cm / Ø 2.4 inch sphere to perform this safety check. Place the sphere inside your
lamp shade and make sure they don’t intersect.
Step 4:
For the price of the table lamp, you're allowed to use a maximum diameter of 13 cm
/ 5.12 inch and the same for the height of your lamp shade. These also happen to be
the ideal dimensions regarding stability, weight, and aesthetics.
Step 5: Upload & Order
Starting from here, you can proceed with your order. After ordering, our 3D printers
will start building your unique lamp.
Step 6:
Program:
Output: