iOS layouts for web developers #5 - events handling
Time to finish the iOS layouts for web developers series with a post about events. Earlier in the series you can read about the controls, control positioning, managing the appearance and CSS properties replacements.
Touchy state of the mobile touch events
Both on the web and in iOS we employ event-based models to define and control the interactions between our application and the external world, especially the user. The general idea of listening and reacting to particular events on specific parts of the UI or on the whole application is the same in both environments. But the sets of events are distinct, and the main difference we always need to be aware of is that users interact with a mobile device differently than with the classical web.
The web’s basic DOM events model was designed when touch interfaces were still very far from mainstream devices. It assumes the application is controlled with a mouse and a keyboard, thus the events carry information like where the mouse pointer is or which key was pressed. And now, when we use the web on a mobile device, we’re still constrained to whatever types of interaction the browser supports, and in a lot of cases we still need to talk in terms of clicks and mouse moves instead of gestures.
The story of proper touch events on the web is long and convoluted. First, Safari introduced its own non-standard touch events support back in iOS 2.0. That de-facto standard was adopted by a few other mobile browsers and later codified as a W3C standard, but its adoption is still quite poor. Meanwhile, a newer, more generic alternative concept of Pointer Events was coined, specified and introduced in Internet Explorer. This led to support fragmentation and uncertainty among developers about what to rely on and what to expect in the future.
On the other hand, we have native mobile platforms like iOS that know much better how a contemporary device is controlled. Instead of mouse and keyboard events, the mobile platform is concerned with multitouch gestures like panning or pinching, or accelerometer-based motion recognition, and exposes them via high-level gesture APIs instead of low-level information about each and every touch or finger movement.
Of course, on a conceptual level, there are some analogies, and some of the most common native gestures can be translated quite directly into the web world. An example is the scroll gesture, performed by touching the screen and dragging the app’s content - mobile browsers happen to emit scroll events then.
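On the iOS side, the closest counterpart of listening for scroll events is implementing the scroll view’s delegate method - here is a minimal sketch, assuming our view controller is already set as the delegate of some scroll view:

- (void)scrollViewDidScroll:(UIScrollView *)scrollView {
    // called continuously during the scroll gesture,
    // much like a `scroll` event listener on the web
    NSLog(@"scrolled to: %@", NSStringFromCGPoint(scrollView.contentOffset));
}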
Simplified gestures handling
Events in iOS are approached at multiple levels of abstraction, giving the developer a way to either easily use the system default gestures or, alternatively, dive down to a more complex but more powerful API.
The simpler layer available is gesture recognizers. Views can have multiple gesture recognizers attached, each configured to detect a particular, possibly complex gesture like pinching (UIPinchGestureRecognizer) or finger rotation (UIRotationGestureRecognizer). When the gesture is detected, the recognizer calls its target - the callback we defined when setting the recognizer up:
- (void)viewDidLoad {
    [super viewDidLoad];

    UITapGestureRecognizer *doubleTapRecognizer = [[UITapGestureRecognizer alloc]
        initWithTarget:self action:@selector(respondToDoubleTapGesture:)];
    doubleTapRecognizer.numberOfTapsRequired = 2;
    [self.view addGestureRecognizer:doubleTapRecognizer];
}

- (void)respondToDoubleTapGesture:(id)sender {
    // react on double tap event here
}
There’s no direct analogy available for complex gestures in the web world. Mobile browser apps often have their own handling for such gestures, not even passing the raw gesture to the currently opened web app. For example, when the user pinches a website on a mobile device, it is zoomed in or out without the web app being notified about that event directly. The solution for handling complex gestures on the mobile web is to defer to 3rd-party libraries that employ the low-level handling mechanisms to emulate gesture events - examples are Hammer.js or Touchy.
3rd-party libraries to handle complex events <—> built-in gesture recognizers
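As an illustration of the built-in recognizers’ side of this analogy, here is a minimal sketch of handling the pinch gesture natively - roughly what libraries like Hammer.js emulate on the web (the handler name is mine):

- (void)viewDidLoad {
    [super viewDidLoad];

    UIPinchGestureRecognizer *pinchRecognizer = [[UIPinchGestureRecognizer alloc]
        initWithTarget:self action:@selector(respondToPinchGesture:)];
    [self.view addGestureRecognizer:pinchRecognizer];
}

- (void)respondToPinchGesture:(UIPinchGestureRecognizer *)sender {
    // scale > 1.0 means the fingers moved apart, < 1.0 means they moved together
    NSLog(@"pinch scale: %f", sender.scale);
}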
Simple events simplified even more
For simpler events that do not require complex “recognizing” of movement patterns and consist of just a single interaction, like a tap, there is a simplified mechanism available in iOS via UIControl's addTarget:action:forControlEvents: method, where we specify callbacks for particular events directly on the control:
- (void)viewDidLoad {
    [super viewDidLoad];

    UIButton *button = // create or find a button
    [button addTarget:self
               action:@selector(respondToTapGesture:)
     forControlEvents:UIControlEventTouchUpInside];
}

- (void)respondToTapGesture:(id)sender {
    // react on tap event here
}
The web analogy here is pretty clear - it’s like adding an event listener to a DOM element, either directly using the element.addEventListener API or through convenient libraries like jQuery with its on function.
addEventListener and its wrappers <—> addTarget:action:forControlEvents:
What else we need to know here is the “translation table” from a DOM event to its corresponding UIControlEvent type. Of course, the analogies are coarse-grained as always, but here we go:
mousedown event <—> UIControlEventTouchDown
mouseup event, conventionally also click event <—> UIControlEventTouchUpInside
change event on form controls <—> UIControlEventValueChanged
drag events <—> UIControlEventTouchDragInside, UIControlEventTouchDragOutside, UIControlEventTouchDragEnter, UIControlEventTouchDragExit
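As an example, here is how we might react to value changes on a slider - a rough native counterpart of a change listener on a range input (the handler name is mine):

- (void)viewDidLoad {
    [super viewDidLoad];

    UISlider *slider = // create or find a slider
    [slider addTarget:self
               action:@selector(respondToValueChange:)
     forControlEvents:UIControlEventValueChanged];
}

- (void)respondToValueChange:(UISlider *)sender {
    // react on the new value here, like a `change` listener would on the web
    NSLog(@"new value: %f", sender.value);
}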
Low-level events handling
There is also lower-level touch events handling available within the UIResponder base class, which fortunately is a base class for UIView, allowing these methods to be used everywhere. There are plenty of methods available to be overridden when our view implementation is interested in being notified about events like the beginning of a touch (touchesBegan:withEvent:), a touch move (touchesMoved:withEvent:) or a touch end (touchesEnded:withEvent:). These events contain locations of all touch points, allowing multitouch support, but in order to detect any pattern we need to analyse the data on our own. This resembles what we have available in some web browsers - see the sketch after the list below.
listening on touchstart event <—> overriding touchesBegan:withEvent: method
listening on touchmove event <—> overriding touchesMoved:withEvent: method
listening on touchend event <—> overriding touchesEnded:withEvent: method
listening on touchcancel event <—> overriding touchesCancelled:withEvent: method
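For illustration, a minimal sketch of a UIView subclass tracking raw touches (the class name is hypothetical):

@interface TouchTrackingView : UIView
@end

@implementation TouchTrackingView

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    // like a `touchstart` listener - each UITouch carries its own location
    for (UITouch *touch in touches) {
        NSLog(@"touch began at: %@",
              NSStringFromCGPoint([touch locationInView:self]));
    }
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    // like a `touchmove` listener - this is where we would analyse
    // the movement pattern ourselves
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    // like a `touchend` listener
}

@end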
Raw motion events handling is also available within UIResponder in the same fashion as touch events, using the motionBegan:withEvent: method and its siblings. On the web we might use device motion and orientation events instead.
listening on devicemotion event <—> overriding motionBegan:withEvent: and motionEnded:withEvent: methods
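A common use case is shake detection - a minimal sketch, assuming our view controller is allowed to become the first responder so that motion events reach it:

- (BOOL)canBecomeFirstResponder {
    // motion events are delivered to the first responder
    return YES;
}

- (void)motionEnded:(UIEventSubtype)motion withEvent:(UIEvent *)event {
    if (motion == UIEventSubtypeMotionShake) {
        // react on the shake gesture here
    }
}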
Other events
For the sake of completeness, I’d mention a few more event types that are somehow supported by both the web standards and iOS.
listening on window.onload, jQuery $(document).ready or similar events <—> overriding UIViewController.viewDidLoad method
listening on window.resize event <—> overriding traitCollectionDidChange: method on the view or view controller
listening on deviceorientation event <—> adding an observer to NSNotificationCenter that observes for UIDeviceOrientationDidChangeNotification (see the sketch below); alternatively overriding traitCollectionDidChange: method on the view or view controller
set of proposed solutions for sensors and hardware integration, not yet implemented <—> remote control events on UIResponder
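Here is a minimal sketch of the NSNotificationCenter approach to orientation changes (the handler name is mine):

- (void)viewDidLoad {
    [super viewDidLoad];

    // ask the device to emit orientation notifications, then observe them
    [[UIDevice currentDevice] beginGeneratingDeviceOrientationNotifications];
    [[NSNotificationCenter defaultCenter] addObserver:self
                                             selector:@selector(orientationDidChange:)
                                                 name:UIDeviceOrientationDidChangeNotification
                                               object:nil];
}

- (void)orientationDidChange:(NSNotification *)notification {
    UIDeviceOrientation orientation = [[UIDevice currentDevice] orientation];
    // react on the new orientation here
}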
Who handles the event?
Not only the types of events should be of interest to us, but also which element or control is responsible for handling the event. There are some differences in that matter between the web and iOS.
In the web, all the interaction events are first dispatched directly to the element the user interacted with - for example, the mousedown event is first delivered to the innermost DOM element the user clicked with the mouse. By default, all the listeners registered for that element are fired and the event is then passed up the DOM hierarchy (this is called event bubbling). Every listener has the opportunity to stop the bubbling (stopPropagation) and/or cancel other listeners, but by default events traverse up to the top of the hierarchy (to the document element), regardless of whether the event was somehow handled or not.
In iOS, the way the event travels up the view hierarchy is similar - it is first delivered to the control deepest down the tree and then passed up, but only until some control actually handles it - the traversal stops then.
calling stopPropagation in a DOM event handler <—> on iOS, propagation is stopped automatically when the event is handled
Also, it is not fully accurate to say the events go up the view hierarchy. They actually go according to the responder chain, which might or might not be equal to the view hierarchy order of controls. For touch events, unless we modify the nextResponder property, it is the same. But we might want to manage which control is the next responder on our own, for example to implement nice keyboard traversal through the text inputs - see the sketch below.
DOM events bubble up the DOM hierarchy <—> iOS events traverse according to the responder chain
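A minimal sketch of such keyboard traversal - here done with the text field delegate rather than by overriding nextResponder directly, and assuming hypothetical usernameField and passwordField outlets:

// assuming self is the delegate of both text fields
- (BOOL)textFieldShouldReturn:(UITextField *)textField {
    if (textField == self.usernameField) {
        // hand the first responder role over to the next input
        [self.passwordField becomeFirstResponder];
    } else {
        [textField resignFirstResponder]; // dismiss the keyboard
    }
    return NO;
}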
But there are a few more quirks. The first one is the way iOS determines which control was touched - it starts from the uppermost window view and performs hit-testing on its subviews recursively. That takes into consideration only the views that are actually visible within the superview’s bounds - so the views that are drawn outside their superview using clipsToBounds = NO can’t handle the touch events properly. The workaround is to override the hit-testing method, but this gets hairy pretty quickly - see the sketch below.
DOM events are delivered to the innermost element according to the DOM hierarchy only <—> iOS touch events are delivered to the innermost control according both to the view hierarchy and the position within bounds
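A minimal sketch of that workaround - overriding hitTest:withEvent: on the superview so that a subview sticking out of its bounds still receives touches:

// in the superview that has clipsToBounds = NO and an out-of-bounds subview
- (UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event {
    UIView *hit = [super hitTest:point withEvent:event];
    if (hit == nil) {
        // default hit-testing failed - check the subviews manually,
        // front-to-back, even for points outside our own bounds
        for (UIView *subview in [self.subviews reverseObjectEnumerator]) {
            CGPoint subviewPoint = [self convertPoint:point toView:subview];
            UIView *subviewHit = [subview hitTest:subviewPoint withEvent:event];
            if (subviewHit != nil) {
                return subviewHit;
            }
        }
    }
    return hit;
}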
One more important trick that is often used to modify the default touch target control on iOS is to disable the userInteractionEnabled flag for the control to prevent it from being considered for hit-testing. In that case, the control that gets the event might not actually be the innermost one, but its nearest ancestor that doesn’t have interaction disabled. To achieve something a bit similar in the web, we can set CSS pointer-events: none on the element we want to “disable”, although this is a rather rough analogy.
pointer-events: none in CSS <—> UIView’s userInteractionEnabled flag
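For instance, a purely decorative overlay can be made “transparent” to touches like this:

// a decorative overlay that should not swallow touches - taps will be
// delivered to whatever control lies underneath it instead
UIView *overlay = [[UIView alloc] initWithFrame:self.view.bounds];
overlay.userInteractionEnabled = NO;
[self.view addSubview:overlay];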
That’s all, folks. I hope you find this series worth reading. I’d be grateful for any corrections, additions or just comments.