Theory of Operation
Personal media devices have great computation power and connectivity to provide users with a vast amount of visual data, including web pages, maps, pictures, spreadsheets and other information content. The emergence of high-resolution active TFT Liquid Crystal and OLED displays has vastly increased the number and variety of smart hand-held devices with information displays. Such devices now include smartphones, gaming devices, and a variety of hand-held computers, GPS-based maps and others.
The display's size will always remain small due to the limitations inherent to the hand-held device's small form factor. This fact leads to the challenge of displaying the contents of a large stored virtual display on the device's small screen. The following picture illustrates the relation between the stored virtual display (left) and the actual hand-held device's display (right). The yellow-bordered area on the virtual display is all the device's display can show (without image reduction).
The personal media device must employ view navigation methods that enable the user to scroll the large stored virtual display. Traditionally, this function was performed with keyboards, and in recent years view navigation by touch screen has become very popular. And yet a major challenge remains: how to enable single-hand operation of the device. For example, while you can navigate a view using a touch screen with one hand, it becomes awkward when many live links are displayed, since you can inadvertently activate them while scrolling the page with your finger. The RotoView technology was invented to resolve this problem. It is intended to complement multi-touch devices, as well as keyboard navigation in devices like electronic book readers where touch screen glare is not desired.
RotoView-enabled devices allow the user to scroll any virtual display just by tilting the device. This reduces the need for pressing switches or using your fingers to navigate the display.

RotoView can use any built-in orientation sensor, such as accelerometers, gyroscopes, camera-based tilt sensors and more.
Our patented RotoView technology controls the view navigation based on changes in the device tilt and related movements. RotoView defines two modes of operation: fixed mode and view navigation mode. Fixed mode is when RotoView is disengaged and the device view is fixed (or controlled by other view navigation means, such as buttons and touch screen). During navigation mode the hand-held device senses changes in its tilt and movement to determine the view navigation (or scrolling) along the x and y axes (up-down, left-right) of the virtual display.
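The two-mode scheme can be sketched as follows. This is only an illustrative model, not RotoView's actual implementation: the class, the gain value, and the linear tilt-to-pixels mapping are all assumptions for clarity.

```python
# Illustrative sketch of the two RotoView modes: in fixed mode tilt changes
# are ignored; in navigation mode tilt deltas scroll the virtual display.
# Names, gains, and limits are hypothetical, not the actual implementation.

FIXED, NAVIGATION = "fixed", "navigation"

class ViewNavigator:
    def __init__(self, view_w, view_h, virtual_w, virtual_h, gain=200.0):
        self.mode = FIXED
        self.x, self.y = 0.0, 0.0           # top-left of the visible window
        self.max_x = virtual_w - view_w     # scroll limits along x
        self.max_y = virtual_h - view_h     # scroll limits along y
        self.gain = gain                    # pixels per radian of tilt change

    def enter_navigation(self):
        self.mode = NAVIGATION

    def exit_navigation(self):
        self.mode = FIXED

    def on_tilt_change(self, d_pitch, d_roll):
        """Map tilt deltas (radians) to x/y scrolling, clamped to the display."""
        if self.mode != NAVIGATION:
            return                          # fixed mode: view stays put
        self.x = min(max(self.x + self.gain * d_roll, 0.0), self.max_x)
        self.y = min(max(self.y + self.gain * d_pitch, 0.0), self.max_y)

nav = ViewNavigator(view_w=320, view_h=480, virtual_w=1280, virtual_h=1920)
nav.on_tilt_change(0.1, 0.0)   # ignored: still in fixed mode
nav.enter_navigation()
nav.on_tilt_change(0.1, 0.05)  # scrolls 20 px down, 10 px right
```

In fixed mode the same tilt input leaves the view untouched, which is what lets RotoView coexist with buttons and touch-screen navigation.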
Orientation sensors have been used for many years in virtual reality systems and in a variety of three-dimensional pointers and 3D mice. Such sensors include accelerometers, gyroscopes, magnetic sensors, camera-based orientation sensors, and many more. Typical of these sensors is the accelerometer. Most smartphones manufactured today already include an accelerometer and a software driver to auto-select "portrait" or "landscape" display mode. Since we invented tilt-based view navigation many years before the emergence of the modern smartphone, we first introduced RotoView technology using our own board. Nowadays, the accelerometer is embedded within the smart media device and the application designer simply interfaces with it via the operating system. In fact, RotoView can be quickly implemented within the device's operating system to enhance all other applications running on the device.
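As a sketch of what such an accelerometer reading provides, the gravity components on a static 3-axis sensor yield tilt angles, and the dominant axis is enough for a crude portrait/landscape decision. The axis conventions and function names below are illustrative assumptions, not any particular OS API.

```python
import math

# Sketch: deriving tilt (pitch/roll) from a 3-axis accelerometer's gravity
# components (in g), the same kind of measurement an OS driver uses to
# auto-select portrait vs landscape. Axis conventions are illustrative.

def tilt_from_accel(ax, ay, az):
    """Return (pitch, roll) in radians from static accelerometer readings."""
    pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))
    roll = math.atan2(ay, az)
    return pitch, roll

def orientation(ax, ay, az):
    """Crude portrait/landscape decision: which in-plane axis gravity dominates."""
    return "portrait" if abs(ay) >= abs(ax) else "landscape"

print(orientation(0.0, -1.0, 0.0))   # device held upright -> portrait
print(orientation(-1.0, 0.0, 0.0))   # device on its side  -> landscape
```

Note that this only measures static tilt; the next paragraph explains why raw accelerometer data is not, by itself, a clean tilt signal during hand movement.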
The user's hand movement cannot be restricted only to tilt change: all hand movements include some lateral movements with acceleration components that add to the sensor's measurements. As a result, precise control of tilt-based view navigation by accelerometers is not efficient if the system rigidly responds to the sensor data. Gyroscope sensors, which are expected to proliferate in smart media devices in the near future, provide better tilt change detection than the accelerometer since they do not respond to linear movements. Furthermore, the user often needs to perform a fast and substantial scrolling of the view to reach the area of interest and then to follow delicately to the final destination. This is one of the reasons for the success of the "flick" gesture in today's multi-touch systems. To achieve similar performance and overcome sensor limitations, RotoView technology employs our Non-linear Dynamic Response (NLDR) algorithms to respond quickly and intuitively to the user's orientation changes. This creates a closed control loop that alleviates the need for an exact linear relation between the orientation changes and the resulting display navigation.
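A minimal sketch of one possible non-linear response curve follows. The actual NLDR curves are RotoView's own; the dead-zone-plus-cubic shape and all the constants here are stand-in assumptions chosen to show the idea: small tilts are ignored as noise, moderate tilts give delicate motion, and large tilts give fast flick-like scrolling.

```python
# Hypothetical non-linear response curve in the spirit of NLDR (not the
# actual RotoView curves). A dead zone suppresses hand tremor; a linear term
# gives fine control near the dead zone; a cubic term takes over for large
# tilts, producing fast "flick-like" scrolling.

def nldr_rate(tilt_deg, dead_zone=2.0, fine_gain=3.0, coarse_gain=0.05):
    """Map a tilt angle (degrees) to a signed scroll rate (pixels/second)."""
    mag = abs(tilt_deg)
    if mag < dead_zone:
        return 0.0                       # ignore sensor noise and hand tremor
    sign = 1.0 if tilt_deg > 0 else -1.0
    excess = mag - dead_zone
    return sign * (fine_gain * excess + coarse_gain * excess ** 3)

for angle in (1.0, 5.0, 20.0):
    print(angle, nldr_rate(angle))
```

Because the user watches the display and keeps correcting the tilt, the loop is closed through the user's own eyes and hand, so the curve only needs to feel intuitive, not to be metrically accurate.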
NLDR algorithms exhibit the following main features:
- Response curves providing a non-linear relation between the amount of tilt or hand movement and the amount (or rate) of view navigation.
- Selection of different stored response curves for use by different applications.
- The response curves may further change dynamically during the navigation process.
While in Navigation mode, the response to re-orientations of the device may change dynamically as mentioned above. For example, at the start of the navigation, the response is fairly coarse, to bring the display to the general area. After a few seconds within Navigation mode, the response automatically becomes more refined, to allow exact placement of the display. As a result, RotoView does not require an exact correlation between orientation changes and actual navigation of the display, which allows the use of relatively low-cost, coarse sensors to determine the orientation.
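The coarse-to-fine behavior described above can be sketched as a gain that decays over the time spent in Navigation mode. The exponential form, the time constant, and the gain values are illustrative guesses, not RotoView's actual schedule.

```python
import math

# Sketch of dynamic response refinement: the navigation gain starts coarse
# and decays toward a fine-control gain over the first seconds in Navigation
# mode. All constants are illustrative assumptions.

def dynamic_gain(t_in_nav, coarse=8.0, fine=1.0, tau=2.0):
    """Gain applied to tilt input, t_in_nav seconds after entering Navigation mode."""
    return fine + (coarse - fine) * math.exp(-t_in_nav / tau)

print(dynamic_gain(0.0))   # coarse response at entry: 8.0
print(dynamic_gain(6.0))   # refined response after a few seconds
```

Multiplying the response curve by such a decaying gain yields exactly the described behavior: fast travel to the general area first, then precise placement.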
In addition to the use of a switch or touch screen command to activate Navigation mode, a RotoView implementation may activate Navigation mode by tapping on the enclosure of the hand-held device. Another embodiment activates Navigation mode by a specific hand gesture. Both of these activation options are well suited for single-hand operation.
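A tap on the enclosure appears in the accelerometer stream as a short, sharp spike, so one plausible detector (a sketch only; the threshold and differencing scheme are assumptions, not RotoView's detector) is a threshold on the change between consecutive acceleration-magnitude samples:

```python
# Sketch of tap-to-activate: a tap on the enclosure shows up as a brief
# acceleration spike, so thresholding the sample-to-sample change in the
# acceleration magnitude (in g) can trigger Navigation mode.
# Threshold and method are illustrative assumptions.

def detect_tap(samples, threshold=1.6):
    """Return the index of the first tap-like spike in accel magnitudes, or -1."""
    for i in range(1, len(samples)):
        # jerk-like difference between consecutive magnitude samples
        if abs(samples[i] - samples[i - 1]) > threshold:
            return i
    return -1

steady = [1.0, 1.01, 0.99, 1.0]        # device at rest: ~1 g throughout
tapped = [1.0, 1.02, 3.1, 0.4, 1.0]    # sharp spike from a finger tap
print(detect_tap(steady))   # -1 (no tap)
print(detect_tap(tapped))   # 2  (spike detected)
```

The same threshold logic distinguishes a deliberate tap from the slow acceleration changes of ordinary hand movement, which is what makes it usable one-handed.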
RotoView stores a trail of virtual display navigation states during the view navigation so that the system can be returned to fixed mode with any of the stored states. The trail is used when a hand gesture command to exit view navigation mode is detected. The personal media display is set to the state just prior to the start of the movement associated with the gesture. Similarly, a stored display state of the trail can be selected by the user to "undo" any inadvertent view navigation that occurred during the navigation.
Additional issues relating to the user interface experience with RotoView are reviewed separately.