The Current State of (Touch) Events

One of the main goals of the jQuery Mobile project is to allow developers to extend the reach of their application content to a wide variety of browsers on different devices. If you take a look at some of the web-enabled devices that are currently out on the market, you will see that there are many different means being employed to allow users to navigate and click/activate elements within a web page.

Older or low-end devices without touch-screen support usually have hardware buttons, scroll-wheels, nubs/joysticks, or track-balls. Devices that use buttons and scroll-wheels usually scroll the page, highlighting actionable (clickable) items along the way. When the user activates the highlighted element on screen, a click event is usually dispatched to trigger any actions associated with that element. Devices that use nubs/joysticks or track-balls typically display a cursor on screen, and usually dispatch mouse and click events just like the desktop. The main point to note here is that the browsers on these devices use the standard desktop mouse events to trigger actions on a web page.

Newer or high-end devices now rely on touch screens as the main means for scrolling and manipulating items within the display. Although there are many options for browsing the web on these devices, a growing number of them are shipping WebKit-based browsers as the default.

One of the common misconceptions I hear quite frequently is the assumption that because these browsers are all based on WebKit, they all share the same features and work identically. The reality is that WebKit is just a rendering engine with a set of APIs that allow developers to write browsers on top of it to communicate with and drive the rendering of the page. It doesn’t know how to load a file, it doesn’t know what hardware/platform it is running on, what graphics library is being used to render objects to the screen, or even how to deal with OS-level events. All of these things are what browsers built on top of WebKit need to provide, and this is what is going to make things interesting and challenging for the next few years. All of these WebKit-based browsers are either written entirely by the device vendor, or supplied with the OS but modified by vendors to work better with their hardware and/or to add or remove browser and WebKit features.

All of these factors create a mobile environment where there are lots of WebKit-based browsers, but the features they support, their performance, and the user experience all vary quite a bit.

When Safari for mobile hit the scene, via iOS, it introduced a set of new touch events:

  • touchstart
  • touchmove
  • touchend
  • touchcancel

These are the DOM-level events that mobile Safari dispatches in real time as the user places one or more fingers (touches) on the screen and drags them around. The big problem is that most pages on the web assume the use of mouse and click events. To keep most web pages functional, mobile Safari dispatches synthesized mouse events after the user lifts their finger, so the web page receives a series of mouse events in the following order:

  • mouseover
  • mousemove
  • mousedown
  • mouseup
  • click
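
Here is a minimal sketch for observing that ordering on an actual device; the “target” element id is just a placeholder, not something from the original post:

    // Log touch and mouse events as they arrive so the dispatch
    // order described above can be observed on a real device.
    var target = document.getElementById("target"); // hypothetical element
    var names = [
      "touchstart", "touchmove", "touchend", "touchcancel",
      "mouseover", "mousemove", "mousedown", "mouseup", "click"
    ];
    names.forEach(function (name) {
      target.addEventListener(name, function (e) {
        console.log(e.type + " @ " + Date.now());
      }, false);
    });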

At this point you may be asking “why didn’t the Safari folks just use mouse events instead of creating a whole new set of events?” I think the answer has to do with the fact that iOS devices support multi-touch. On traditional computing platforms there was always the notion of a single mouse with a main (left) button and perhaps center and right buttons. Although you could click and hold down these buttons at different times to generate multiple overlapping mousedown and mouseup events, they were still tied to a single source for the move/positioning information. Also, folks have become accustomed to the fact that these buttons perform specific actions; for example, the right mouse button is typically associated with bringing up a context menu. With the new multi-touch events, not only can you have more than three touches, but each touch generates its own set of touchstart, touchmove, and touchend events, and in some cases touchmove events can be coalesced into a single event if more than one touch shares the same target. Suffice it to say that the newer touch events are fundamentally different in behavior, and perhaps the Safari folks did not want to break or modify the well-established mouse usage and behavioral model.
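
To make that difference concrete, here is a small sketch (mine, not from the original post) that reads the per-touch data off a touchmove event; each entry in the event’s changedTouches list is an independent touch point with its own identifier and coordinates:

    // Each touch point carries its own identifier and position, so a
    // handler can track several fingers independently -- something the
    // single-cursor mouse event model was never designed to express.
    document.addEventListener("touchmove", function (e) {
      for (var i = 0; i < e.changedTouches.length; i++) {
        var t = e.changedTouches[i];
        console.log("touch " + t.identifier + ": " + t.pageX + "," + t.pageY);
      }
    }, false);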

There are a few interesting things to note about touch events on iOS:

  • Only one event for each mouse event type is dispatched.
  • Mouse events are dispatched approximately 300 milliseconds after the user lifts their finger (see the sketch after this list).
  • Mouse events are not dispatched if the touch results in the screen scrolling. Scroll events are also not dispatched until after the user lifts their finger.
  • Mouse events are not dispatched if the user initially touches the screen with more than one finger.
  • Touch events are not dispatched to textfields and textareas. Only mouse events are dispatched.
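
You can see that delay first-hand by timestamping touchend and measuring how much later the synthesized click arrives; a rough sketch:

    // Record when the finger lifts, then measure how long the
    // synthesized click takes to arrive (roughly 300ms on iOS).
    var lastTouchEnd = 0;
    document.addEventListener("touchend", function () {
      lastTouchEnd = Date.now();
    }, false);
    document.addEventListener("click", function () {
      if (lastTouchEnd) {
        console.log("click arrived " + (Date.now() - lastTouchEnd) +
                    "ms after touchend");
      }
    }, false);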

Ok, so getting back to the larger picture, vendors with touch-based devices and WebKit-based browsers have decided to adopt Safari’s touch events. The problem is that each vendor now has to implement the event code that drives the touch events. It was explained to me by a device vendor that every hardware device and OS has its own unique implementation and API for dispatching events, and that this leads to some interesting differences in browser behavior and event implementations. After playing with several iOS, Android, and BlackBerry devices, I have seen firsthand that this is indeed true. Some examples off the top of my head include:

  • BlackBerry dispatches interleaved touch and mouse events in real time, while Android and iOS dispatch single mouse events after the user lifts their finger.
  • Some devices dispatch scroll events in a somewhat real-time manner, while others only dispatch a single event after the user lifts their finger.
  • Android devices require preventDefault() on touchstart to prevent screen scrolling, while other devices require preventDefault() on touchmove, but this causes form elements to break because you can no longer click on them (see the sketch after this list).
  • iOS dispatches a touchend event when the screen scrolls, but some platforms just stop dispatching touch events while the screen scrolls.
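
As an illustration of the preventDefault() difference noted above, here is a hedged sketch that suppresses native scrolling while dragging an element; which call actually takes effect (touchstart vs. touchmove) depends on the device, and the “draggable” element id is just a placeholder:

    // Suppress native screen scrolling while dragging an element.
    // Some Android devices only honor preventDefault() on touchstart,
    // while other devices only honor it on touchmove -- and calling it
    // on touchstart can break clicks on form elements.
    var draggable = document.getElementById("draggable"); // hypothetical element
    draggable.addEventListener("touchstart", function (e) {
      e.preventDefault(); // required on some Android devices
    }, false);
    draggable.addEventListener("touchmove", function (e) {
      e.preventDefault(); // required on other devices
    }, false);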

Some of these differences are bugs or temporary problems with current implementations, but the fact remains that devices with these problems may exist and be used for a long time, since vendors decide if and when these devices can be updated with fixes. Hopefully things will get better as standards emerge.

Another complicating factor is that some devices have both a touch screen and a nub/joystick/track-ball. For jQuery Mobile, we need to support both touch and mouse events within all our components. We can’t just rely on mouse events because they don’t provide the real-time feedback/response that is necessary to make things feel snappy when the user is touching the screen. But supporting both is a big headache because it complicates event handling. For example, we need to set up a component to listen for both touch and mouse events, but then we need to disable the mouse event handlers if touch events are used so that handlers/actions are only triggered once. We then need to re-enable the mouse handlers when the touch events are all done, but sometimes “done” is hard to determine, because sometimes touch events simply stop coming when the screen scrolls. A simplified sketch of this pattern follows below.
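
This sketch is my own simplification, not the actual jQuery Mobile implementation: handle activation from the touch event, then ignore the synthesized mouse events that arrive shortly afterwards. The element id and timeout value are assumptions for illustration:

    // Listen for both touch and mouse activation, but ignore the
    // synthesized mouse events that follow a handled touch so the
    // action only fires once.
    var ignoreMouseUntil = 0;
    var button = document.getElementById("myButton"); // hypothetical element

    function activate() {
      console.log("component activated");
    }

    button.addEventListener("touchend", function () {
      activate();
      // Synthesized mouse events arrive ~300ms later; ignore mouse
      // input for a little longer than that.
      ignoreMouseUntil = Date.now() + 750;
    }, false);

    button.addEventListener("click", function () {
      if (Date.now() < ignoreMouseUntil) {
        return; // synthesized mouse event -- already handled via touch
      }
      activate(); // real click from a mouse, nub, or track-ball
    }, false);

The timeout here is a crude stand-in for the “when are the touch events done?” question raised above, which is exactly what makes the real problem hard.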

Over the next few weeks we’ll be blogging about some of the ways we are dealing with these challenges while trying to reduce the event code complexity for jQuery Mobile components and implementing features like faux momentum scrolling. Stay tuned!

