Among the many announcements made this week at Apple’s 2018 WWDC keynote, we saw the unveiling of iOS 12. Apple’s event followed Google’s event last month in showcasing their newest OS for mobile devices, and there are certainly parallels to draw in aspects such as the privacy or “state of mind” features. But more importantly, iOS 12 also promises to bring some significant improvements to the perceived performance of iPhones, as well as enhancements to core OS functionality such as notifications.
iOS 12 to be Available for the iPhone 5s – 6 Years of Device Updates
One of the biggest revelations about iOS 12 is the fact that Apple is going to bring it to all devices that can currently run iOS 11. This means that Apple's oldest supported generation of products, which are based around the A7 SoC like the iPhone 5s and iPad Air (1), will continue to receive support for another year, rather than being retired after 5 years. As per Apple's typical schedules, this means that these devices will be supported through September of 2019, marking an incredible 6 years of both feature update and security update support.
This is an astounding feat which puts the Android ecosystem to shame – and Apple wasn’t pulling punches when bringing up the fact that iOS 11 currently has an 81% installed base versus Android O’s 6%. Apple’s decision to extend iOS 12 this far back to past devices means it will have the widest range of device support of any iOS version to date.
Increased Device Performance & Responsiveness
One of the largest announcements was that Apple is focusing on the performance of iOS 12, and particularly on application launch times.
Citing an iPhone 6 Plus as an example device, Apple is touting that they're going to be able to improve application launch times by up to 40%, with particular examples such as a 50% faster keyboard bring-up or a 70% faster camera launch.
Apple explained that they’ve achieved this through more aggressive tuning of the CPU DVFS to ramp up in frequency more quickly, and through the introduction of what was essentially described as a touch booster mechanism.
I was extremely surprised to hear Apple talk about and announce this, as I wasn’t aware iOS was lacking touch boosting mechanisms. For context, a touch booster is simply a mechanism which listens for software or hardware events from the OS or the touchscreen and gives hints to the DVFS mechanism to immediately request higher performance states (frequencies). Apple lacking such a mechanism in the past means that CPU frequency selection was merely a closed-loop system with no knowledge of user context.
With iOS 12 now introducing a touch booster, the OS will give hints to the hardware. For example, when launching an application it will immediately ramp up CPU frequencies, resulting in the advertised performance improvements.
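To make the distinction concrete, here's a minimal sketch of the two behaviours described above – a closed-loop DVFS governor that only reacts to measured load, plus a touch booster that jumps straight to the top performance state on an input event. This is purely illustrative; the frequency table and boost duration are made-up assumptions, not Apple's implementation.

```python
FREQ_STEPS_MHZ = [600, 1200, 1800, 2400]  # hypothetical frequency table
BOOST_MS = 100                            # hypothetical boost hold time

class Governor:
    def __init__(self):
        self.freq = FREQ_STEPS_MHZ[0]
        self.boost_until_ms = 0

    def on_touch_event(self, now_ms):
        # Hint from the OS/touchscreen: don't wait for load to build up,
        # request the top performance state immediately.
        self.freq = FREQ_STEPS_MHZ[-1]
        self.boost_until_ms = now_ms + BOOST_MS

    def on_load_sample(self, load_pct, now_ms):
        if now_ms < self.boost_until_ms:
            return self.freq  # boost overrides the closed loop
        # Plain closed-loop selection: pick the step whose capacity
        # covers the observed load. Without the booster, this is all
        # the governor ever sees.
        idx = min(len(FREQ_STEPS_MHZ) - 1,
                  load_pct * len(FREQ_STEPS_MHZ) // 100)
        self.freq = FREQ_STEPS_MHZ[idx]
        return self.freq

gov = Governor()
print(gov.on_load_sample(10, 0))    # low load -> lowest step (600)
gov.on_touch_event(1)               # user touches the screen
print(gov.on_load_sample(10, 50))   # still low load, but boosted (2400)
print(gov.on_load_sample(10, 200))  # boost expired -> back to 600
```

The key point is visible in the middle call: the load is still low, so a purely closed-loop governor would stay at its lowest frequency during the critical first frames of an app launch – exactly the latency the booster hint eliminates.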
I’ve not been a great fan of iOS responsiveness due to the fact that I find the animations far too slow – on Android devices I outright disable animations. Longer animations are also a way to hide performance hiccups – and I do wonder (and hope), now that Apple has vastly improved on this aspect in iOS 12, if they’ll be able to provide ways to reduce animation durations, or offer a way to get rid of them altogether.
Group FaceTime to up to 32 People
Another large announcement was the introduction of group FaceTime. The practical explanation is extremely straightforward – everything you’ve been able to do in FaceTime until now, you can do with several people at once. The new UI will show you 4 of the most prominent group users currently talking, with a dynamic emphasis on the current speaker by increasing their video frame size.
What was most astounding about this new feature is its technical requirements. Having up to 32 group members means that you essentially have 32 video streams – even if the implementation might only ever have 4-8 actual full playback streams. HEVC, with its variable block size quadtree compression algorithm, would certainly allow sharing a single stream among all group members on the video playback side, with varying block quality for the thumbnails depending on who’s in focus; however, this wouldn’t work on older iPhones, so it might not be a codec-dependent solution at all.
The device receiving the return/playback stream would then composite the different block segments into different layers in the app UI, but this would mean there’d need to be a central entity receiving all the recorded streams and recombining/re-encoding them before resending them to all members. The latter assumption would make sense anyway, given that the upload bandwidth requirements of a 32-way group video would otherwise be enormous.
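The bandwidth argument is easy to sketch with back-of-envelope arithmetic. Assuming a hypothetical 1.5 Mbps per stream (a made-up figure for illustration), a full-mesh design where every member uploads directly to every other member is untenable, while a central relay that recombines and re-encodes keeps per-member upload constant:

```python
# Illustrative upload-bandwidth arithmetic for an N-way group video call.
# The per-stream bitrate is an assumption, not Apple's figure.

def mesh_upload_mbps(members, stream_mbps):
    # Full mesh: each member uploads their own stream to every other member.
    return (members - 1) * stream_mbps

def relay_upload_mbps(stream_mbps):
    # Central relay: each member uploads a single stream; the server
    # recombines/re-encodes and fans the result back out.
    return stream_mbps

print(mesh_upload_mbps(32, 1.5))  # 46.5 Mbps up per member
print(relay_upload_mbps(1.5))     # 1.5 Mbps up per member
```

At 32 members the mesh topology demands over 46 Mbps of sustained upload from every participant – far beyond typical cellular or residential uplinks – which is why a central recombining entity is the plausible design.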
No matter how Apple achieved this – it definitely looks like a large technical challenge and definitely a first.
Grouped Notifications and Notification Management
iOS 12's notification management will now be able to sort through notifications based on topic and group them together in order to reclaim notification area real estate. Repeated notifications from an app will thus be grouped together when there are multiple notifications, allowing the OS to show notifications from other apps as well – instead of more recent notifications from a single app using all the available space and pushing out the older ones.
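The grouping behaviour can be sketched in a few lines. This is a conceptual illustration of the collapse-per-app logic described above, not Apple's implementation; the data shapes are made up:

```python
# Collapse repeated notifications from one app into a single stack so
# that other apps' notifications stay visible (illustrative sketch).

def group_notifications(notifications):
    """notifications: list of (app, message) tuples, newest first.
    Returns one entry per app: (app, newest_message, count)."""
    groups = {}   # app -> [newest_message, count]
    order = []    # preserve newest-first app order
    for app, message in notifications:
        if app not in groups:
            groups[app] = [message, 0]
            order.append(app)
        groups[app][1] += 1
    return [(app, groups[app][0], groups[app][1]) for app in order]

feed = [("Messages", "Dinner?"), ("Messages", "On my way"),
        ("News", "Headline"), ("Messages", "Hello")]
print(group_notifications(feed))
# -> [('Messages', 'Dinner?', 3), ('News', 'Headline', 1)]
```

Three Messages notifications collapse into one stack, so the News entry is no longer pushed out of view – the space reclamation the new design is after.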
The new notification management will now allow the user to customise how an app’s notifications will be delivered – allowing a choice between the classic behaviour, or a “quiet” behaviour where notifications will not appear on the lock screen, present banners, or make any sound. It’s also possible to completely turn off an app’s notifications.
“Digital wellbeing” – Apple’s take
When hearing about iOS 12’s new screen time management features, one couldn’t help but be reminded of Google’s focus on its own “digital wellbeing” features announced in Android P.
iOS 12’s implementation couldn’t be more similar: features such as the new per-app screen time tracking very much mimic what we’ve seen from Google. The new OS provides in-depth usage statistics on a per-app basis, and also allows the user to manage this by setting up imposed limitations. Much like Android P, if users then exceed this limitation they’re prompted with a warning. Of course, you can choose to ignore this warning and continue using the app – the mechanism is meant as a deterrent and reminder to the user. What iOS 12 can also do, though, is use the same mechanisms for parental control, where a device’s functionality can be outright limited when it exceeds its allocated time.
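The distinction between the two modes boils down to one branch: the same limit check that produces a dismissible warning for the device owner becomes a hard block under parental control. A minimal sketch, with hypothetical names:

```python
# Illustrative sketch of deterrent vs. enforced screen-time limits.

def check_usage(used_min, limit_min, parental_control=False):
    if used_min < limit_min:
        return "ok"
    # Same limit, two behaviours: a warning the user may dismiss,
    # or an outright block when parental controls are active.
    return "blocked" if parental_control else "warn"

print(check_usage(25, 30))                        # ok
print(check_usage(45, 30))                        # warn (dismissible)
print(check_usage(45, 30, parental_control=True)) # blocked
```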
Another new feature is a more dynamic “do not disturb” mode which can be enabled just for the timeframe of a meeting that’s in the calendar, or, more interesting and novel, until one leaves the current location.
The Memoji – The emoji wars continue
Beyond the new and exciting Animoji available in iOS 12, we now see Apple taking a note from Samsung’s AR Emoji in introducing fully customisable avatars: Memoji. The big difference here is that Apple’s implementation and resulting characters are a lot more cartoony than Samsung’s – which can be both a positive and a negative: positive as it doesn’t have the same uncanny valley problem as AR Emoji, but negative as it’s also far less expressive and bears less likeness to oneself.
Siri Takes Shortcuts to Learn Your Habits
The subject of Siri has as of late been a contentious one in the Apple ecosystem. While Apple had an early lead in the field when they launched Siri, they’ve arguably not pushed forward on development fast enough, allowing both Google and Amazon to surpass them in the natural language assistant space in a lot of ways. This isn’t a pile of technical debt Apple will dig themselves out of in a single year, but all eyes have been on how they’re going to go about catching back up with the pack.
The first step then is a new feature for iOS 12 that Apple is dubbing Shortcuts. Shortcuts is a means for third-party apps to advertise features and other data to Siri, including offering a means to suggest to Siri shortcuts that may be of relevance to the user. The end result is that Siri can learn what shortcuts a user accesses and suggest them when conditions are right – for example, if the phone is in a certain location – or users can take things one step further and create their own shortcuts with the aptly-named Shortcuts app.
It’s really the latter aspect that makes shortcuts powerful. Siri still isn’t to a point where it’s all-knowing, but this gives users the most direct way to date of teaching Siri new tricks. Shortcuts can be assigned to a voice command, so to take a page from Apple’s examples, “heading home” could instruct Siri to access CarPlay (set radio), HomeKit (turn on house thermostat), Messages (send notification), Maps (plan a route), and more.
Apple’s example is entirely based on built-in applications; however, Shortcuts is fully extensible to third-party applications. Developers do need to expose features via the Shortcuts API, but once they do so, users can integrate them into Shortcuts workflows.
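The learn-and-suggest loop described above can be sketched conceptually: apps repeatedly "donate" a shortcut whenever the user performs the action in a given context, and the system later suggests the most-used shortcut when that context recurs. This is a toy model of the idea, not the actual SiriKit API; all names here are made up:

```python
from collections import Counter

# Conceptual sketch of context-based shortcut suggestion.

class ShortcutSuggester:
    def __init__(self):
        self.history = Counter()  # (context, shortcut) -> use count

    def donate(self, context, shortcut):
        # An app reports that the user ran this shortcut in this context.
        self.history[(context, shortcut)] += 1

    def suggest(self, context):
        # Suggest the most frequently used shortcut for this context.
        candidates = {s: n for (c, s), n in self.history.items()
                      if c == context}
        if not candidates:
            return None
        return max(candidates, key=candidates.get)

s = ShortcutSuggester()
s.donate("leaving_work", "Heading Home")
s.donate("leaving_work", "Heading Home")
s.donate("leaving_work", "Order Coffee")
print(s.suggest("leaving_work"))  # Heading Home
```

A "Heading Home" suggestion surfacing when the phone detects the user leaving work mirrors Apple's own keynote example, where that one voice command fans out into CarPlay, HomeKit, Messages, and Maps actions.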
ARKit 2 - Apple Gives Everyone Group Hallucinations
One of Apple’s big wins from a technology perspective has been ARKit, their Augmented Reality framework. While their hardware lacks the cameras required for true rear 3D depth perception, they’ve done a very good job of making up for it with a single camera. And now they’re expanding their capabilities further with ARKit 2.
The marquee feature of the latest version of ARKit is what Apple is calling Shared Experiences, which is the ability for ARKit clients to share a common world, with each client able to see it from its own perspective. This essentially turns ARKit into a multi-user/multi-player experience, and is a logical step forward given where ARKit has succeeded thus far. Apple’s demo of choice was a LEGO game, and certainly I’d expect to see games being the most popular use case right now.
Meanwhile, ARKit is also getting 3D object detection capabilities. Apple somewhat glossed over this in the keynote itself, but according to their developer notes, it appears that developers will be able to train their apps to recognize specific objects, which can then be used to trigger further AR content.
Apple is also rolling out an ARKit 2-powered application of their own: Measure, which, true to its name, allows measuring the size of objects and distances. Given that Apple is doing this without true depth perception, I’m definitely interested in seeing how well this will work; it will be a feather in their cap if they can make it work reliably off of a single camera.
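For a sense of the geometry involved: once an AR framework has established the distance to a detected surface, a pinhole-camera model recovers real-world size from pixel size by similar triangles. Apple hasn't detailed Measure's actual method, so the following is only an illustration of the underlying principle, with made-up numbers:

```python
# Similar-triangles size estimate under a pinhole camera model:
#   real_size / distance = pixel_size / focal_length
# All figures below are hypothetical.

def real_size_m(pixel_size, focal_length_px, distance_m):
    return pixel_size * distance_m / focal_length_px

# An object spanning 400 px, with a focal length of 1000 px and a
# detected plane 0.5 m away, measures 0.2 m (20 cm) across.
print(real_size_m(400, 1000, 0.5))  # 0.2
```

The hard part – and what tracking frameworks like ARKit actually provide – is the distance term, which must be inferred from motion parallax and plane detection rather than measured by a depth sensor.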
Finally, Apple is even creating a new file format for 3D models to be used with AR and ARKit 2, which is being named USDZ. Fundamentally, Apple is pitching this as a means to easily share content between users. But they’re also bringing third parties like Adobe on-board in order to get editing support for USDZ files integrated into their respective tools.
And More to Come
Overall, iOS 12 marks a modest improvement in iOS in terms of features. But judging from Apple’s focus on performance, it will offer a much-needed lull to focus on software quality. iOS 11 was a very rocky launch for Apple, with many bugs that required several patches, along with features that weren't ready for shipping until many months down the line. Apple of course survived it relatively unscathed, but this development schedule certainly hints that they don’t want to have an iOS 11-style launch again – which would be of benefit to everyone.
In the meantime, this week’s keynote just scratches the surface of what iOS 12 is set to offer in some ways. Apple’s making a bunch of security and privacy updates, for example, most of which they’ve barely touched and are quite likely still under very active development. So it will be interesting to see where things stand in September, and what other little quality of life improvements Apple may be able to squeeze in by then.
The iOS 12 beta is available to developers now, and a public beta will be occurring later this summer. The release version of the OS will then be available in the fall.
from AnandTech https://ift.tt/2Jfzgak