id | year | title | url | text
---|---|---|---|---|
713 | 2019 | "Facebook’s Encryption Makes it Harder to Detect Child Abuse | WIRED" | "https://www.wired.com/story/facebooks-encryption-makes-it-harder-to-detect-child-abuse" | "Hany Farid, Business
Facebook’s Encryption Makes it Harder to Detect Child Abuse
Zuckerberg has repeatedly expressed his desire to “get it right” this time. The technology exists to get it right.
Photograph: Andrew Harrer/Bloomberg/Getty Images
In 2018, the National Center for Missing and Exploited Children received more than 18 million reports to its CyberTipline, constituting 45 million images depicting child sexual abuse. Most of these children were under the age of 12, and some were as young as a few months old.
Since its inception in 1998, the CyberTipline has received a total of 55 million such reports. The reports from 2018 alone constitute nearly half of all reports over the past two decades.
Hany Farid is a professor at UC Berkeley, with a joint appointment in electrical engineering and computer science and the School of Information. He was part of the team, in collaboration with Microsoft, that developed PhotoDNA.
These staggering numbers don’t cover the entirety of online services. Most of NCMEC’s reports are automatically generated by an image hashing technology I helped develop called PhotoDNA, which extracts a distinct signature from uploaded images and compares it against the signatures of known harmful or illegal content. Flagged content can then be instantaneously removed and reported.
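To make the comparison step concrete, here is a minimal sketch of the general idea behind perceptual image hashing: derive a compact signature from pixel data and match by distance rather than exact equality, so small edits to an image still match. The 8x8 grid, the average-hash construction, and the threshold are illustrative assumptions; PhotoDNA's actual algorithm and parameters are not public.

```python
def average_hash(pixels):
    """Derive a 64-bit signature from an 8x8 grayscale pixel grid."""
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    # Each bit records whether a pixel is brighter than the grid's mean.
    return sum(1 << i for i, p in enumerate(flat) if p > avg)

def hamming_distance(h1, h2):
    """Count the bits on which two signatures differ."""
    return bin(h1 ^ h2).count("1")

def matches_known(signature, known_signatures, threshold=5):
    """Flag an upload whose signature is near any known signature."""
    return any(hamming_distance(signature, k) <= threshold
               for k in known_signatures)

# A uniformly brightened copy still matches, which exact hashing would miss.
known = [average_hash([[10 * j + i for i in range(8)] for j in range(8)])]
edited = [[10 * j + i + 2 for i in range(8)] for j in range(8)]
print(matches_known(average_hash(edited), known))  # True
```

Matching by distance rather than equality is what lets a scheme like this survive resizing, recompression, and other minor edits that would defeat an ordinary checksum.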
But not every online service uses PhotoDNA. And child sexual abuse material shared via the dark web, personal correspondences, and services that use end-to-end encryption generally doesn’t get reported to NCMEC or anyone else. Frustratingly, Facebook, the world’s largest social network, is set to grow the digital realm where images of child sexual abuse can spread freely.
Earlier this year, Facebook CEO Mark Zuckerberg announced that his company is expanding the use of end-to-end encryption on its services, preventing Facebook or anyone else from seeing the contents of communications. Zuckerberg conceded that this comes at a cost. “Encryption is a powerful tool for privacy, but that includes the privacy of people doing bad things,” he said. “When billions of people use a service to connect, some of them are going to misuse it for truly terrible things like child exploitation, terrorism, and extortion.” Broader adoption of end-to-end encryption would cripple the efficacy of programs like PhotoDNA, significantly increasing the risk and harm to children around the world. It would also make it much harder to counter other illegal and dangerous activities on Facebook's services. This move also doesn’t provide users with as much privacy as Zuckerberg suggests. Even without the ability to read the contents of your messages, Facebook will still know with whom you are communicating, from where you are communicating, and a trove of information about your other online activities. This is a far cry from real privacy.
Knowing that tens of millions of examples of the most heartbreaking imagery pass through its services every year, why would Facebook undermine its own ability to prevent itself from becoming a safe haven for child predators? The not-so-cynical answer is that Facebook is leveraging the backlash from its recent privacy scandals to launch a strategy that provides plausible deniability against the equally loud accusations that the company is not doing enough to suppress child abuse material, terrorist propaganda, crime, or dangerous conspiracies. By encrypting the content moving through its services, Facebook gets a twofer: It can claim to be ignorant of the abuse, while also telling the public that it cares about privacy. But neither one is true.
Many in law enforcement have argued that shifting to end-to-end encryption would severely hamper criminal investigations and national security. The US attorney general, his British and Australian counterparts, and the 28 European Union member states have all urged Zuckerberg to delay the implementation of end-to-end encryption until proper safeguards can be put in place.
Facebook's move has reawakened the fraught debate over whether governments should have a way to pierce encryption.
I argue that governments that operate under the rule of law should, with a warrant, be granted the same access to our electronic lives as they are to our physical lives. Government overreach or abuse can be adjudicated by the courts, and Facebook can choose not to deploy its services in countries in which governments cannot be trusted.
We should continue to debate how to balance the incremental privacy afforded by end-to-end encryption against the cost to our safety. But even now, Facebook can protect our children while widening its use of encryption.
Recent advances in encryption and hashing mean that technologies like PhotoDNA can operate within a service with end-to-end encryption. Certain types of encryption algorithms, known as partially or fully homomorphic, can perform image hashing on encrypted data. This means that images in encrypted messages can be checked against known harmful material without Facebook or anyone else being able to decrypt the image. This analysis provides no information about an image’s contents, preserving privacy, unless it is a known image of child sexual abuse.
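As a toy illustration of the property the piece invokes, computing on data without decrypting it, note that textbook RSA is multiplicatively homomorphic: a party holding only two ciphertexts can produce a valid encryption of the product of the underlying plaintexts. This sketches the homomorphic property only, not any deployed hash-matching scheme, and the tiny parameters are insecure by design.

```python
# Toy RSA parameters; insecure by design, for illustration only.
p, q, e = 61, 53, 17
n = p * q                                # public modulus
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)                      # private exponent (Python 3.8+)

def enc(m): return pow(m, e, n)
def dec(c): return pow(c, d, n)

a, b = 42, 7
product_cipher = (enc(a) * enc(b)) % n   # computed on ciphertexts alone
assert dec(product_cipher) == (a * b) % n
print("E(a)*E(b) decrypts to a*b:", dec(product_cipher))  # 294
```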
Another option is to implement image hashing at the point of transmission, inside the Facebook apps on users’ phones—as opposed to doing it after uploading to the company’s servers. This way the signature would be extracted before the image is encrypted, and then transmitted alongside the encrypted message. This would also allow a service provider like Facebook to screen for known images of abuse without fully revealing the content of the encrypted message. Facebook would be wise to adopt either of these options.
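A minimal sketch of this second, client-side option, under stated assumptions: the function names and message format here are invented for illustration, the cipher is a toy placeholder, and a cryptographic digest stands in for a true edit-tolerant perceptual hash (a real deployment would use something like the hash sketched earlier). None of this reflects how Facebook's apps actually work.

```python
import hashlib

def perceptual_hash(image_bytes: bytes) -> str:
    # Stand-in for a real perceptual hash; a cryptographic digest is not
    # edit-tolerant and is used here only to keep the sketch self-contained.
    return hashlib.sha256(image_bytes).hexdigest()

def encrypt(payload: bytes, key: int) -> bytes:
    # Toy placeholder for a real end-to-end cipher; offers no security.
    return bytes(b ^ key for b in payload)

def send_image(image_bytes: bytes, key: int) -> dict:
    # Client side: extract the signature BEFORE encrypting, then transmit
    # it alongside the ciphertext.
    return {"ciphertext": encrypt(image_bytes, key),
            "signature": perceptual_hash(image_bytes)}

def server_screen(message: dict, blocklist: set) -> bool:
    # Server side: check the plaintext-derived signature against known
    # material without ever decrypting the image itself.
    return message["signature"] in blocklist

blocklist = {perceptual_hash(b"known-bad-image-bytes")}
msg = send_image(b"known-bad-image-bytes", key=0x5A)
print(server_screen(msg, blocklist))  # True: flagged without decryption
```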
We do not need to cripple our ability to remove some of the most harmful and heinous content in the name of an incremental amount of privacy. Zuckerberg has repeatedly expressed his desire to “get it right” this time. The technology exists to get it right. Facebook needs to now do what its leaders and everyone else know is the right thing: protect our children.
WIRED Opinion publishes articles by outside contributors representing a wide range of viewpoints. Read more opinions here.
Submit an op-ed at [email protected].
" |
714 | 2020 | "How to Keep Your Zoom Chats Private and Secure | WIRED" | "https://www.wired.com/story/keep-zoom-chats-private-secure" | "David Nield, Security
How to Keep Your Zoom Chats Private and Secure
Illustration: Sam Whitney
With so many people stuck inside, Zoom has become the default video chat platform for millions. Its simple, accessible interface makes keeping in touch with family, friends, and coworkers a cinch. At the same time, many have found Zoom’s default privacy and security features lacking, exposing users to trolls and unwanted oversight. If you're using Zoom, here’s how to stay safe and protected.
First, keep in mind that Zoom’s security is fine for most people. If your meetings are more sensitive, though, you should know that the platform’s claims of end-to-end encryption don’t really hold up, and critics have found the type of encryption it does implement lacking in some ways. We have some suggestions below for other platforms with more robust encryption in place.
For privacy and trolling concerns, though, there are plenty of settings you can tweak to make Zoom a safer place for you and everyone else on the line.
Every Zoom meeting is based around a 9-digit meeting ID. If that ID becomes public somehow, or trolls find it in a web search or guess it, they can pop into your chats and disrupt them. That's obviously a problem, and an increasingly common occurrence.
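Some rough arithmetic shows why guessing is viable at this keyspace size; the count of concurrently active meetings below is a made-up figure for illustration, not a Zoom statistic.

```python
# A 9-digit ID gives at most 10**9 possibilities, so a scanner trying
# random IDs hits a live meeting roughly once per keyspace/active guesses.
keyspace = 10 ** 9
active_meetings = 1_000_000            # illustrative assumption
guesses_per_hit = keyspace / active_meetings
print(f"roughly one live meeting per {guesses_per_hit:,.0f} random guesses")
```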
You've got a few ways to guard against this. First and most obviously, be careful who you share the meeting ID with; posting it on your public Twitter feed isn't the best idea. Bear in mind that contacts you've added in Zoom will be able to see your Personal Meeting ID, and so will know how to find any meetings you launch with it.
When you launch or schedule a meeting, the options panel lets you generate a random ID for the meeting rather than using your personal one. Using a random ID is another way to avoid trolls, though if you've got an office team who always meet with the same ID, you might not consider the extra inconvenience worth it.
To absolutely lock down a meeting, make sure participants need a password to access it. Again, this can be found in the options pane when you create or schedule a meeting. Of course, be careful how you share the password and who you share it with.
Finally, if you look under the advanced options for hosting meetings, you'll see an Enable Waiting Room option. People are put on hold here before you give them specific approval to join, and it can help to block out anyone you weren't expecting. All these options can be set on a meeting-by-meeting basis, or configured as defaults by going to your Zoom settings on the web.
Even with those precautions in place, you're still not completely protected against unwanted guests, or indeed from bad behavior by the guests that you have invited to your video chat. As a host, you've got a few handy options for limiting what other users can do.
For starters, you can restrict screen sharing: If you go to your Zoom settings on the web and click In Meeting (Basic), you'll see a Screen sharing option to stop anyone except you from sharing the desktops or apps on their computer. You can still grant screen sharing privileges to specific users in a meeting later, if you need to.
The same option is available after you've launched a meeting on Windows or macOS. Click the small arrow next to Share Screen, then Advanced Sharing Options, and you can ensure that only you can bring up videos, images, or anything else from your computer or phone.
Another step you can take is to lock a meeting once you're sure that everyone who needs to join has joined. From the desktop app, click Manage Participants, More, and then Lock Meeting.
Just make doubly sure that you weren't expecting someone who hasn't yet arrived, as they won't be able to get in.
Add all of these measures up together and you can be very confident that your next Zoom meeting isn't about to get rudely interrupted. Be careful not to get complacent though, particularly when it comes to limiting the exposure of the meeting IDs and the passwords that you're using for your video calls.
So you're safe and protected from outsiders; all that's left is an awareness of what your boss can peek at while you're using Zoom as a meeting participant. Meeting hosts have a lot of privileges and tools at their disposal, which you should know about going in.
Zoom had an attention-tracking feature, for instance, that told hosts if participants clicked away from the Zoom app for more than 30 seconds. After a public backlash, Zoom deactivated the feature last week.
Also remember that hosts can record audio and video from meetings in full, as well as keep a record of public chats. What's more, if you save the chat log for yourself, it will also include private chats you've been involved in, so be very careful about sharing that file with anyone else. Don't just post it in the group chat for everyone to read. If a host chooses to enable this setting, Zoom will notify you and give you a chance to opt out.
There's not a lot you can do about these features, which are designed to make it easier to create logs for people to look back on later, but it's worth knowing about them. A simple rule of thumb: If there's a communication you don't want anyone else to know about, keep it off Zoom.
If you're not happy with Zoom, then you've got plenty of other options to turn to. For example, Google Duo recently raised its maximum video chat group size from 8 to 12, it's available on mobile devices and the web, and video and audio calls are end-to-end encrypted (not even Google can peek at the data).
For those of you with colleagues, family, and friends who are all on Apple devices, FaceTime is an option. Group video chats of up to 32 people are supported, end-to-end encryption is turned on by default, and the apps are simple to use across iOS, iPadOS, and macOS. The downside is, of course, that no one on Windows or Android can join in.
Webex from Cisco is another group video calling tool that supports end-to-end encryption: It's a little business-focused, but you do get support for video calls of up to 100 people, and a lot of the same features that Zoom brings to the table. The free tier is quite generous at the moment, though we'll have to wait and see if it remains so after the current global pandemic has passed.
Like Webex, GoToMeeting has been in the virtual meeting business a long time, and includes end-to-end encryption as standard. Unlike Webex, there are no free plans, so you or your company will have to pay $12 a month and up for video calls with up to 150 different people. There’s also a 14-day free trial.
If you can live without full end-to-end encryption—so you're essentially putting your trust in the software developer not to gather any more data than it needs to—then programs such as Skype (up to 50 people on a video call), Slack (up to 15 people on a video call with a paid plan), and Facebook Messenger (up to 50 people on a video call) are all options as well.
Correction 4/5/20 12 pm ET: This story previously stated that Zoom hosts could use an attention-tracking feature that the company had disabled last week.
" |
715 | 2020 | "What's New in iOS 14 (and iPadOS 14): Our Full Feature Rundown | WIRED" | "https://www.wired.com/story/apple-iphone-ios-14-new-features" | "Julian Chokkattu, Gear
Apple's iOS and iPadOS 14 Have Dropped. Here's What’s New
For years, the iPhone home screen has been a grid of app icons that go on for pages and pages. That's beginning to change.
Photograph: Apple
The iPhone's software is getting a face-lift. The latest version of Apple's mobile operating system, iOS 14, is now available for download, and you'll notice several visual tweaks when you first install it. Notably, your home screen looks very different, with an app library, widgets everywhere, and a new look for Siri. We've collected all the top upgrades you'll find in iOS 14, along with some small changes, to help you make sense of it all.
All of these features are also available in iPadOS 14, the iPad's operating system, which you can also install now. If you're interested in all the new hardware Apple recently announced, check out this roundup.
But first, you might be wondering how you'll be able to install them. Anyone with an iPhone 6S or newer (that includes the 2016 iPhone SE) can download iOS 14 right now. For the tablets, you'll need an iPad Air 2 or newer, an iPad Mini 4 or newer, or an iPad 5th generation or newer. All iPad Pro models can install iPadOS 14 now, too.
Now, before you install anything, make sure to back up your device. (We have a guide that can help!) Once you've done that, the rest is very simple. Open the Settings app, tap General, and then Software Update.
Your device will search for an update and will then start downloading it. It will take a few minutes and will automatically restart, so make sure you initiate this when you aren't doing anything important.
As a word of advice, the first version of new Apple updates can still have some bugs. The safest bet is to wait a day or two to see if there are reports about any major issues. If not, you can rest easy installing it. Now, onto what's new.
For years, the iPhone home screen has been a grid of app icons that go on for pages and pages. That's changed now. In iOS 14, you can hide pages of apps you don't use often, and a scroll to the right will let you access your new App Library. It's quite similar to the app drawer on Android phones, but instead of more icons in an endless vertical stream, apps are grouped into various categories like Social, Productivity, and Entertainment.
The top two categories (which look like big folders) are Suggested and Recent Apps. Suggested Apps uses machine learning to recommend apps you might want to use next, and Recent Apps shows apps you recently used or installed. There's also a search bar at the top.
Photograph: Apple
Until now, the iPhone's widgets have been relegated to the Today View on the left of the main screen. Now, you can pull these widgets out and into your home screen (just like on Android) and get alternate sizes for them (you can't pull widgets out of Today View on iPadOS). This allows you to customize how your phone looks and quickly access certain functions, like switching music tracks with your music app's widget. To see all the widgets available with the apps you have installed, there's a Widget Library. Just be aware that developers may not have widgets ready yet (or no plans to make one) for your apps.
One particular widget from Apple is Smart Stack, which bundles together a variety of widgets into one oblong-shaped box. You can swipe through this to see the others, or Smart Stack will automatically change the widget based on time of day and your usual activity. For example, in the morning, Smart Stack might show you a morning news briefing. In the afternoon, it might switch to your calendar widget, and in the evening, it might show your fitness activity summary.
Video: Apple
If you're watching a movie on your iPhone but need to switch to a messaging app to respond to someone, Apple's new Picture-in-Picture mode means you don't need to hit the pause button. Instead, you'll see a floating screen over your home screen (or any other app). You can resize it, drag it around, and control video playback. You can even minimize it to the side of the screen but still have audio playing if you need your iPhone's full screen for something else.
A new version of Siri won't take up your whole screen when you just want to ask a question. Instead, Siri now looks like a small bubble at the bottom. Ask it for the weather and you'll see a pop-up notification at the top of the screen with the answer. It's a little smarter too. It can access information from across the web (to some degree) and can also now send audio messages for you in the Messages app.
Photograph: Apple
Apple's moving in on Google with its new Translate app. At the moment, it supports 11 languages, and an on-device mode keeps text and voice translations private. If you turn your iPhone into landscape view, the app will turn on Conversation mode, which offers a side-by-side view that makes it easy for both parties to see the translation.
Photograph: Apple
Your Messages app is getting a slew of updates. First, you can pin important conversations to the very top of the app. These will appear as big circles, different from the other threads in the app, and you can pin up to nine threads. For group messages, you'll see circular images of everyone in a group at the top of the screen, and people who have been more active than others will appear slightly bigger (you can also set a group photo).
In group chats, you can reply inline to specific messages and view this as a separate thread. You can also type someone's name to "mention" someone, similar to using the @ function on other messaging apps like Facebook Messenger or Slack. With the latter feature, you can have conversations only send a notification if you have been mentioned.
There are new Memoji designs to choose from, including 20 new hair and headwear styles, more face coverings, and age options. There are three new Memoji stickers too: a hug, a fist bump, and a blush.
The redesigned Apple Maps that Apple introduced last year is available in three new countries: the UK, Ireland, and Canada. Apple says it's also working with trusted brands to integrate travel guides into Apple Maps, which include recommendations for places around you. Perhaps even more helpful, Maps can now tell you when you are approaching a speed sensor or red-light camera.
Cycling navigation is also available in Maps. It will take into account elevation, so you'll know if you'll be dealing with a lot of hills. Unfortunately, it's only available in New York, Los Angeles, the San Francisco Bay Area, Shanghai, and Beijing to start. More cities are on the way in the coming months. You can ask Siri for cycling directions.
If you have an electric car, you'll be happy to learn that Apple has added EV routing into Maps. It takes into account temperature, weather, elevation, and other information to automatically add charging stations to your route if you'll need to juice up soon. Apple says it's working on deep integration with car manufacturers like BMW and Ford, so it will know exactly which stations will support your car.
You will soon be able to tap your phone to the door of a car to unlock it via NFC technology, just like paying with Apple Pay. If you lose your iPhone, you can turn off keys remotely via iCloud. You can even "share" your car key via iMessage and set restricted driver profiles, which can limit things like acceleration, top speed, and more. The first car to support this feature will be the 2021 BMW 5 Series, and it will likely take a number of years for a good portion of vehicles to support it.
Photograph: Apple
Apple wants to make it easier for you to find and use new apps based on what you are doing and where you are. This comes in the form of App Clips, which are bite-sized versions (10 megabytes or less) of apps that you can use for one-off instances. For example, if you're browsing Panera's menu in Safari or looking up the closest restaurants near you in Maps, an App Clip might pop up from the bottom of your screen. It's a lightweight version of the Panera app you can use to check the menu and place an order for pick up. It relies on Apple Pay and Apple's sign-in instead of requiring you to make a Panera account if you don't have one.
Another example is using an App Clip to pay for a parking meter or rent a scooter. These App Clips can be found by tappable NFC tags or QR codes around you. If you need to find an App Clip again, you can see it in the new App Library, so you can download the full app later if you want. It's very similar to Android Instant Apps, which Google introduced a few years ago.
If you have an Apple Pencil, you're now able to write with it in any text field, like a search bar, and the iPad will convert your handwriting into text. It means you don't need to rely on the virtual keyboard as much when you're not using a physical keyboard.
What's also nice is that you can select your handwriting using a Smart Selection tool, and if you paste it into an app that doesn't support handwriting, the iPad will automatically transcribe it into text. There's also a Shape Recognition tool, which will perfect your sloppily drawn shapes. It's handy if you want to keep things neat or if you're making diagrams.
Those are some of the major iOS 14 and iPadOS 14 upgrades. Here are some smaller tidbits. If you want to read every single update, check out Apple's iOS 14 preview website and the one for iPadOS 14.
You can change the default email and web browser apps. So you can replace Apple's Mail app with Gmail, for example.
Universal Search's interface will no longer interrupt what you're doing, and you can use it to search for anything—like installed apps or contacts—not to mention complete web searches. You can even search within apps. Similarly, when you get a call, the notification will be a banner at the top instead of hogging the whole screen.
You'll be able to "Sign in With Apple" inside apps by tapping a button to port your existing accounts into your Apple account.
You can search for emojis with the keyboard, and the keyboard's dictation feature now uses the same engine as the one used for Siri, meaning your dictations will be more accurate. It's also running on-device, so it works offline.
You'll now see a pop-up notification when an app wants to track you across apps and websites owned by other companies. You can allow it or ask the app not to track you. This means it will reduce the amount of data collected by the app. Similarly, new cards in the App Store will show what kind of data an app might collect before you install it. It's meant to act just like the nutrition label on food packaging. You can also share App Store subscriptions with your whole family.
On the camera front, you can now shoot photos up to 90 percent faster, at up to four frames per second. QuickTake video is now available on the iPhone XR and XS. And you can quickly toggle the video resolution and frame rate in video mode. If you have an iPhone 11 or 11 Pro, Night mode now offers up a guidance indicator to make sure you stay steady during capture, and you can also cancel a Night mode shot midway instead of waiting until the end. There is also a camera recording indicator in the status bar, and you can add captions to photos and videos in the Photos app.
Select Apple apps in iPadOS now feature a sidebar for easier navigation, making better use of the larger screen.
The Health app now lets you add how much sleep you want to get every night. A Wind Down mode prepares your phone for bedtime and wake-up, so you can schedule things like playing soothing sounds. It automatically turns on Do Not Disturb and Sleep mode; the latter dims your phone screen and shows the date, time, and next alarm.
On the privacy front, you can share your approximate location with apps instead of your precise location. The Control Center also shows which apps recently accessed your microphone or camera. And if you connect to a Wi-Fi network that doesn't use a private Wi-Fi address, you'll get a warning.
You can assign reminders to people you share lists with, and they will get a reminder.
" |
716 | 2020 | "Antarctic Glaciers Are Growing Unstable Above and Below Water | WIRED" | "https://www.wired.com/story/antarctic-glaciers-are-growing-unstable-above-and-below-water" | "Eric Niiler, Science
Antarctic Glaciers Are Growing Unstable Above and Below Water
Photograph: Alex Mazur/International Thwaites Glacier Collaboration
For several years, scientists have been worried about the retreat and eventual collapse of Thwaites Glacier, a Florida-sized plug that holds back the West Antarctic ice sheet from the Southern Ocean. If Thwaites goes kaput, the resulting catastrophe could raise global sea levels by more than two feet on its own, or by eight feet in combination with melting from nearby glaciers, according to NASA estimates.
That fear has driven a big push by international teams of researchers to understand what’s going on at Thwaites and nearby Pine Island Glacier. A group of researchers from the United States and the United Kingdom took advantage of an unusually calm period in Antarctica in January 2019 to explore the two glaciers and the ocean nearby with ships, unmanned submarines, and aircraft to find out what’s happening to the ice and how fast. The initial scientific fruits of this expedition, part of a five-year $50 million effort called the International Thwaites Glacier Collaboration, are now being published, and the results are worrying. Researchers operating special ship-mounted sonar gear found a series of 25-mile-wide channels in the seafloor that bring warm water to the base of the Thwaites and Pine Island Glaciers. When this warm flow meets the place where the glaciers rest on top of the edge of the Antarctic continent—known as the grounding line—the ice underneath the glacier melts and the whole glacier becomes more slippery. Think of an ice cube sliding across the counter on its own meltwater.
Marine geophysicist Kelly Hogan of the British Antarctic Survey mapped the seafloor in front of the glaciers. For two months in the winter of 2019, Hogan was part of a joint US-UK expedition to the region, a trip that began at Punta Arenas, Chile. After a five-day crossing to Antarctica on the US research vessel Nathaniel B. Palmer , Hogan arrived at the Thwaites study site and found herself staring at a massive wall of ice. “We approached Thwaites at night,” Hogan recalls. “It was dark and foggy. I went to the bridge to talk to the captain, and as we were talking this 25-meter cliff emerged out of the gloom.” Over the next two months, the scientists traversed the 80-mile wide embayment in front of the glacier in a back-and-forth pattern known as “mowing the lawn.” The researchers used a multibeam echosounder mounted under the ship to collect sonar images of the seafloor that were assembled into a 3D map. Together, they revealed massive seafloor channels moving warm water to the base of the glacier.
“They are important because Thwaites is vulnerable to changing quickly under climate change,” Hogan says. “One of the drivers is warm water getting underneath the floating parts and increasing the melting. The fact we have these big underwater channels going all the way up to the base of the glacier—because they are deeper and larger, you get more of that warm water and would increase the ability to melt.” Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg Photograph: Linda Welzenbach/International Thwaites Glacier Collaboration The US/UK team published two scientific papers last week from the 2019 expedition.
One, authored by Hogan in the journal Cryosphere, detailed the team’s new map of the seafloor at Thwaites using the ship-based sonar readings. The second paper contained new data from another group that flew over the glacier in a Twin Otter aircraft with ground-penetrating radar that was able to look through the ice. The researchers also used special equipment to detect gravitational changes in the glacier that revealed the density of the bedrock below the ice.
The crew flew over both the glacier and the bay where Thwaites meets the ocean, according to David Porter, an associate scientist at Lamont Doherty Earth Observatory at Columbia University and an author on the second study, also published in the journal Cryosphere.
The two teams shared data sets from the aircraft and ship. “We used measurements of gravitational changes to make a new map of the glacier and seafloor shape,” Porter says. “In combination with seafloor bathymetry, the data has revealed the shape of the seafloor and that there are deep pathways to allow warm water to move onshore, across the continental shelf and go in contact with the ice.” These underwater channels are up to 3,280 feet deep and between 12 and 25 miles wide, says Porter. “That’s one reason Thwaites has been changing,” he says.
Scientists say Thwaites and the other glaciers are not likely to collapse in the next century; they are simply too big to fail right now. At the same time, they are seeing troubling signs of increased melting that can still cause a slow rise in sea levels around the globe. One of the big questions is the speed of the melting, and whether it will reach some kind of tipping point at which it cannot be reversed, even if society contains carbon emissions and somehow slows climate change.
Illustration: International Thwaites Glacier Collaboration
“The ice shelf is getting weaker,” says Stef Lhermitte, an assistant professor of geoscience and remote sensing at the Technical University of Delft in the Netherlands. “The ice shelf slows down the traffic behind. At the moment you lose the ice shelf, the glaciers are free to flow and discharge their ice into the ocean.” Lhermitte led a separate study of Thwaites and Pine Island Glaciers with a group of Dutch, French, and US scientists, who used a 21-year dataset of satellite imagery to reveal the first signs of structural weakness—crevasses and open fractures in the ice shelves that could signal their disintegration in the future. Their results show that the damage is creating a positive feedback loop that triggers more damage and faster-moving ice flows.
That study, published today in the Proceedings of the National Academy of Sciences, concludes that understanding how the ice field ruptures as it moves across the bedrock is vital to understanding when this collapse might occur. In addition to identifying the weak points in the glacier, Lhermitte and colleagues created a computer model to predict how such cracking and buckling could affect other Antarctic glaciers in the future.
Lhermitte says the goal of this model was not to predict the exact date when Thwaites will collapse. That’s next to impossible right now, because there are too many other unknown factors to consider, such as the pace of climate change that is warming both the air and water temperature around the glaciers, as well as the movement of ocean currents around Antarctica. (A 2014 study published in the journal Science by University of Washington scientists used satellite data and numerical modeling to predict that the West Antarctic Ice Sheet, including Thwaites, may collapse in 200 to 1,000 years.) Instead, Lhermitte’s model is an attempt to incorporate ice sheet damage into similar global climate models that predict both sea level rise and the future of Antarctica’s glaciers. “The understanding of how much and how fast these glaciers are going to change is still unknown,” Lhermitte says. “We don’t know all the process. What we have done with this study is look at this damage, the tearing apart of these ice shelves, and what their potential contribution to sea level rise could be.” Predicting glacier ice movement is difficult because ice behaves as both a solid and as a liquid, says Penn State University professor of geosciences Richard Alley, who was not affiliated with any of these studies. Alley says the study about how glaciers fracture is both new and important because it gives more insight into how fast they might collapse. In an email to WIRED, Alley compared the science of studying how Antarctic glaciers move to the process of engineering a bridge.
“You do NOT want your bridge to break, and you do not want to need to predict exactly the conditions that will make it break, so you design with a large safety margin. We can't ‘design’ Thwaites, so we face these large uncertainties. Quantifying parts of that is important, although remembering that this is still fracture mechanics, and it still might surprise us, one way or the other,” Alley wrote.
Lhermitte thinks his study results mean that Antarctic glaciers need to be closely watched in the coming years for any signs of rapid change that might lead to an environmental catastrophe. “They are these large sleeping giants,” Lhermitte says about Thwaites and Pine Island glaciers. “We start to be curious if they will stay sleeping or awake with large consequences, with sea level rise.”
" |
717 | 2016 | "Google Pixel Upends the Android Universe | WIRED" | "https://www.wired.com/2016/10/google-pixel-upends-android-universe" | "Brian Barrett, Gear
Google Pixel Upends the Android Universe
By the time Google announced its pair of Pixel smartphones on Tuesday, the devices had already been leaked all to pieces. Shape, size, specs; even color variants were laid bare by clumsy carriers. In fact, just about the only thing left unknown about the Pixels is what they’ll do to the already splintered Android ecosystem.
Google had previously released smartphones, made by a revolving set of hardware partners, under its Nexus line. But Nexus devices, despite consistent excellence, never amounted to more than a sideshow. Pixel wants to be the main event. And that should make other Android manufacturers very, very nervous.
Android has a problem worth solving. It’s a terrific operating system. The latest release, Android Nougat, is both mature and refined, full of thoughtful and responsive touches that rival anything iOS can offer. If only anyone could use it.
Two weeks after it started rolling out, Nougat had reached fewer than 0.1 percent of Android devices. In fact, the most popular Android version today remains Android KitKat. It came out in the fall of 2013.
Android fragmentation is not a new lament, but it’s an increasingly irksome one. Updates need to route through a complex obstacle course of carriers and manufacturers, if they ever make it at all. Recent hardware releases get left behind; security issues go unpatched. User experience suffers. It’s frustrating, for users and Google alike.
Things aren’t much prettier on the hardware side.
“Android has always been a fickle master---it's been an enabler of a huge chunk of the smartphone market, but with very few exceptions it hasn't been a driver of significant margins, and the lack of differentiation between Android vendors has created something of a race to the bottom in recent years,” says Jan Dawson, chief analyst at Jackdaw Research.
It’s a bifurcated market. Samsung dominates the top end, while a raft of low-cost imports scrap it out in the affordable territories. Those who have attempted to occupy the middle, like HTC, have gotten squeezed. An open ecosystem that should promise choice has resulted in surprisingly little of it.
“The fragmented Android ecosystem is very competitive with very low margins,” says Anindya Ghose, director of the Center for Business Analytics at NYU Stern. “Even Samsung makes only 17 percent margins.” Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Gear You’ll Be Able Buy Cars on Amazon Next Year Boone Ashworth Gear The Best USB Hubs and Docks for Connecting All Your Gadgets Eric Ravenscraft Gear The Best Black Friday Deals on Electric Bikes and Accessories Adrienne So Nexus devices didn’t offer Google much remedy. The phones were high-quality and competitively priced, but sold primarily through Google’s own limited channels.(The exception to this was the giant, Motorola-made Nexus 6, an expensive phone with carrier partners, released just after Google’s brief Motorola ownership. It could be seen as a sort of Pixel trial run.) More to the point, Nexus phones were built through partnerships with bona fide phone-makers; they were a chance for manufacturers to show off their chops as much as for Google to show off Android. In practice, that meant many Nexus devices felt like repeats of existing third-party hardware---and carried their limitations as well. With Pixel, that changes.
Unlike the Nexus devices, the Pixel is purely Google’s vision. The company outsourced manufacturing to HTC, but everything about Pixel is made to Google’s exact specifications. Even the tagline, Made by Google, not so subtly reminds that all other Android devices are not. This is as pure a vision of what Google wants in a smartphone as has ever existed.
It also couldn’t come at a more critical time. The other tentpoles of Google’s Tuesday announcement were Google Assistant, an AI-powered helper that hopes to anticipate your every need, and Daydream, a sleek stab at creating virtual reality for the masses. These products, and others like them, aren’t just Google’s future. They’re the future of personal computing. Those stakes are too high to leave to someone else’s design team.
“Building hardware and software together lets us take full advantage of capabilities like the Google Assistant,” said Rick Osterloh, Google’s SVP of hardware. “It lets us harness years of expertise we’ve built up in machine learning and AI to deliver the simple, smart, and fast experiences that our users expect from us.” The inspiration here is clear enough that you could just as easily call it a blueprint. Apple’s unwavering command over both hardware and software let the iPhone become one of the most successful consumer products in history. The Pixel doesn’t just look like the iPhone; it aspires to clone its success as well.
It’s the right move. A Pixel smartphone won’t just have the latest Android version and the best camera. It’s going to be the spool from which Google’s new voice-controlled ecosystem will be threaded. It’s meant to be the window through which a generation views virtual worlds.
“It is well known that Android can be much better optimized than what it is today,” says Ghose. “By controlling the hardware, Google can dramatically improve the reliability and consistency of the user experience on Android phones.” All of which is to say that Pixel is serious. And it could be a serious problem for anyone else in the Android business.
While Pixel smartphones will create a shining example of what an Android device can be, they’ll also make life that much harder on anyone else who makes them.
“All else equal, an HTC or a Huawei would want to be a true partner rather than just a contractor,” says Ghose. “But Google does not want any other manufacturer's brand to shine on Pixel other than its own.” Being paid as a contractor is better than not making any money at all—a situation with which HTC is painfully familiar.
But even then, there are only so many Pixel contracts to go around. And everyone else making Android devices faces the prospect of being even further behind.
Aside from the obvious under-the-hood benefits of Google controlling both hardware and software, Pixel phones launch as the only smartphones with Google Assistant built in. That’s a genuine point of differentiation, especially as Google pushes its broader ecosystem with Google Home and Chromecast. Pixel phones will be more efficient, more feature-filled, and more broadly integrated than any other Android competitor, full stop. And they’ll continue to be for as long as Google makes them.
The possible upside to all of this is that it will push manufacturers like Samsung and LG and Motorola to innovate and iterate even more aggressively. We’ve seen some of that in the laptop space, after Microsoft similarly shook an industry by bringing the Surface Book in-house. In fact, we’ve already started to see it, whether it’s modular designs from LG and Motorola or continued hardware excellence---explosions aside---from Samsung.
Google, too, says it will stand by its Android partners. And it also sees Pixel as an innovation engine.
"We want to continue to see the entire Android ecosystem thrive," says Google spokesperson Iska Saric. "With these new phones, we aim to provide the best Google phone experience, which we hope will also contribute to future innovation and development of the ecosystem." Hardware manufacturers have other options, too. They could embrace an emerging operating system, though Samsung’s early adventures with Tizen don’t offer much encouragement there. Or they could turn to other product categories for relief, as HTC has virtual reality.
Whatever the future, an already brutal Android ecosystem just got even more so. The Pixel looks great, unless you happen to share its software DNA. Then it looks like a long, slow fade.
“Hardware isn’t a new area for Google, but now we’re taking steps to showcase the very best of Google across a family of devices designed and built for us,” said Osterloh. “This is a natural step, and we're in it for the long run.” This story has been updated to include comment from Google.
" |
718 | 2,017 | "Why Self-Driving Cars Have to Watch Their Human Passengers | WIRED" | "https://www.wired.com/2017/02/self-driving-cars-wont-just-watch-world-theyll-watch" | "Jack Stewart | Transportation | Self-Driving Cars Won't Just Watch the World—They'll Watch You
It's Monday morning, you're late for work, and as you merge onto the freeway you see it: the sea of red brake lights. It's going to be a slow, frustrating trip---for all the suckers who have to drive their own cars. You click yours into autonomous mode and spend the slog getting ahead on work emails, or even catching up on sleep.
Yes, the day you become a co-driver is fast approaching. But as cars master how to see, understand, and navigate the world, researchers are shifting their attention to another subject: you. Paradoxical as it may seem, the more control the car has, the more it needs to know about the person sitting behind the wheel---whether they're paying attention, their mood, even their health.
“We are making tremendous progress in instrumenting vehicles to know everything that’s happening around them, but there are just not enough sensors looking at the driver inside the car,” says Anuj Pradhan, who studies human factors at the University of Michigan’s Transportation Research Institute.
Used to be, if you stopped paying attention while driving, you'd just crash. And in 20 or 50 years, when cars are 100 percent autonomous, whatever you're up to won't matter, because you'll have zero responsibility. Today's technology sits between those points: The robots are doing some of the work. Tesla already sells cars that drive themselves on the highway, as long as the human monitors the system, ready to take over at any moment. Next year, Audi plans to introduce a more capable system, where the driver is demoted from supervisor to understudy, necessary only when things go to pot.
A lot of the players in this business hate that idea (a bunch are avoiding that kind of system) because people are godawful backups.
They're prone to dozing off, zoning out, goofing around. But if you want an autonomous car that can roam beyond a constrained geographical zone , or that can stay on the road in less than ideal weather conditions---and you want it this decade---you're gonna need some human help.
So researchers and engineers in the autonomous space are focusing more and more attention on the human. One surprise: Smartphones can help. “Being in an autonomous car is incredibly boring, and we have a lot of people who fall asleep,” says Wendy Ju, who studies self-driving cars at Stanford’s Center for Design Research.
Distractions like texting and tweeting, super dangerous in a regular car, can be useful in a self-driving one, engaging the human's brain.
Demanding a human take the wheel is way harder when that person's sleeping. "These are things that keep you awake," Ju says. "They're actually good." Great---as long as the car knows what the human's up to, and whether they're able to take control of the car if needed. Basic driver monitoring systems have been around for more than a decade, mostly aimed at combatting drowsy driving. In 2003, Volvo introduced its Intelligent Driver Information System, which monitors steering wheel and pedal inputs, and whether the turn signal is on. That's enough to guess whether the driver's in the middle of a high-stress overtaking maneuver---and that it's better to automatically decline that incoming call. Some BMW models will pop up an icon of a steaming cup of coffee if steering inputs start wandering, and it seems like the driver could be nodding off. Toyota has used a camera to watch the driver's eyelids.
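Carmakers don't publish these algorithms, but the steering-wander idea is simple enough to sketch. Here is a minimal, hypothetical heuristic in Python; the window size and threshold are invented for illustration, not taken from any production system:

    from statistics import stdev

    def looks_drowsy(steering_angles, window=50, wander_threshold=0.5):
        # Drowsy steering tends to alternate long stretches of no correction
        # with abrupt jerks, which inflates the variance of the
        # angle-to-angle changes inside a sliding window.
        if len(steering_angles) < window:
            return False
        recent = steering_angles[-window:]
        deltas = [b - a for a, b in zip(recent, recent[1:])]
        return stdev(deltas) > wander_threshold

A real system would fuse a signal like this with pedal inputs, turn-signal state, and lane position before popping up the coffee-cup icon.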
This new challenge invites a more Orwellian approach. Australian company Seeing Machines says its gaze tracking technology will allow cars to act as co-drivers, because they'll know what the driver has and hasn't seen. Industry supplier Pioneer wants to monitor the driver's heart rate. Harman is working on tech that measures pupil dilation, aiming to understand cognitive load.
This richness of information is likely crucial for the semi-autonomous car, but it could also inform how truly driverless systems work. Think about when you’re a passenger in the front seat of a regular car, and the driver is speeding, or changing lanes erratically. You may tense up, frown, tug at your seatbelt---communicating you’d prefer to slow down, please, without having to say it.
Even if you're happy with the driving style, you may point out things the driver missed, or suggest a different route. These could all be possible with an autonomous car too, with the right sensors pointed at the person in what was the driver's seat.
This goes both ways. The car's computer can better calibrate how it talks to the driver, like with a louder warning if the driver is obviously distracted. It could also offer to engage autonomous systems if the road conditions look good and the driver looks tired.
Having a camera pointed at your face raises obvious privacy concerns, but Ju says it's unlikely all that data will be collected and kept. "It would be very expensive from a bandwidth perspective to transmit video of what you're doing in the car." And if that changes, it may just be the price you pay for improved safety. Until your autonomous car can cope without a human altogether, and you become, at long last, irrelevant.
" |
719 | 2,016 | "Facebook's Head of AI Wants to Teach Chatbots Common Sense | WIRED" | "https://www.wired.com/2016/06/facebooks-head-ai-wants-teach-chatbots-common-sense" | "Klint Finley | Business | Facebook's Head of AI Wants to Teach Chatbots Common Sense
Facebook is already disconcertingly good at recognizing faces in photos. But the company's director of artificial intelligence research, Yann LeCun, wants to push AI even further. Today at the 2016 WIRED Business Conference, he said he wants to teach chatbots common sense.
That's an important part of Facebook's goal of enabling its Facebook M virtual assistant to actually understand the things you ask it to do. Today, Facebook M is powered in part by humans. But eventually Facebook wants to power the entire thing with AI.
LeCun is a founding father of deep learning, one of the most important branches of artificial intelligence today. Deep learning techniques are used for everything from the algorithms that filter your Facebook feed to Android's voice recognition system to Skype's cutting-edge real-time translation tool. But while machines have gotten really good at recognizing voice commands and translating one human language into another, AIs still can't really understand language, LeCun explained.
Making that happen means teaching computers to learn in much the same way humans do. LeCun points out that babies learn to associate words with objects by simply observing the world around them. It takes at least a couple years, but we humans are able to learn all this with relatively few examples, at least compared to the number of images that LeCun and company feed their computers. "So there's something we're missing about human and animal learning," he says. That missing thing, LeCun explains, is what we might call common sense.
To fill in that missing piece, he and his colleagues are working on what's called predictive learning. Today, the most popular way of training an AI is what's called supervised learning. Basically, if you want to teach an AI to recognize cars, you'll show it thousands or millions of pictures of cars, and eventually it will figure out the common attributes of a car—like wheels—and be able to spot cars in other photos. That's much easier than the old way of doing things, which involved trying to manually program the system to recognize wheels and other common features of cars. But what LeCun and his team would rather do is let machines observe the world and figure out what cars are simply by seeing lots of them and noticing that people call them "cars." That's what humans do, after all.
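To make "supervised" concrete, here's a toy learner in Python. It's a two-feature logistic classifier with made-up features, nothing Facebook actually uses; real systems learn millions of weights from millions of images, but the loop is the same: show a labeled example, nudge the weights toward the right answer.

    import math

    def train(examples, labels, lr=0.1, epochs=200):
        w = [0.0] * len(examples[0])
        for _ in range(epochs):
            for x, y in zip(examples, labels):
                # predict, then nudge each weight toward the correct label
                pred = 1 / (1 + math.exp(-sum(wi * xi for wi, xi in zip(w, x))))
                w = [wi + lr * (y - pred) * xi for wi, xi in zip(w, x)]
        return w

    # hypothetical features: [has_wheels, has_wings]; label 1 = car, 0 = not
    weights = train([[1, 0], [1, 1], [0, 1], [0, 0]], [1, 0, 0, 0])

Predictive learning, by contrast, would drop the labels and have the system infer them from raw observation.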
Facebook is approaching these twin challenges–understanding language and predictive learning–together. LeCun explained that the company is trying to teach its AI systems to understand human language by essentially having them watch over the shoulders of the real humans who respond to queries on the Facebook M virtual assistant. But making it work will require more than just lots of conversations for the software to eavesdrop on. It means figuring out the mathematical and conceptual pieces that are missing from the model, which is what LeCun and company are working hard to do.
And LeCun says Facebook can't do this alone. Predictive learning is more of a scientific problem than a technological one, he says. And that means doing research out in the open , the way scientists do. "Doing research in secret just doesn't work," he says.
" |
720 | 2,016 | "Google PhotoScan App Makes it Easy to Scan Your Old Photos | WIRED" | "https://www.wired.com/2016/11/google-photoscan-app-scan-your-old-photos" | "Tim Moynihan | Gear | Google Just Made It Way Easier to Scan Your Old Photos
Google Photos is the best photo-management tool you can put on your phone, but it won't do you any good if your favorite photos are all in a shoebox.
Built to scan your prints, eliminate any glare you'd get from taking a picture of them, and keep them all straightened out in digital form, Google's latest mobile app is good news for anyone with a bunch of packed Fotomat envelopes. It's bad news for anyone in the scanner or shoebox industries.
The new PhotoScan is a standalone app for both Android and iOS, and scanning a picture is a clever combination of manual shooting and computational photography. Once you take an initial photo of... a photo, the app recognizes the four corners of the frame and displays circular overlays on each corner of the scanned image. You then point your phone camera at each circle to create a robust scan of the image, and PhotoScan gets to work from there.
Unlike just shooting a smartphone photo of an image, which is a tricky dance of glare and shadows and blown-out details, the four-corner scanning process eliminates reflections and other aspects of digital deterioration. Like an old-school panorama app, PhotoScan stitches together a single image from those several overlapped photos, making sure to eliminate any glare-infected shots while evening out the overall exposure.
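Google hasn't published PhotoScan's pipeline, but the core trick is easy to sketch: once the shots from the four corners are aligned to a common frame, glare is a bright outlier that appears in only some of them, so a robust per-pixel combination suppresses it. A minimal NumPy sketch, assuming the alignment has already been done:

    import numpy as np

    def merge_scans(aligned_shots):
        # aligned_shots: list of same-sized grayscale arrays, already registered.
        # The per-pixel median ignores a bright reflection present in only one shot.
        stack = np.stack(aligned_shots, axis=0)
        return np.median(stack, axis=0)

The hard engineering lives in the step this sketch assumes away: registering handheld shots precisely enough that a per-pixel merge doesn't blur the result.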
Once it’s captured, a photo is backed up online and added to your Google Photos library, where the app offers its standard face-recognition and manual enhancement tricks. It’ll be a great showcase for Google Photos’ facial recognition over time; the app is already really good at identifying the same person over the course of their life with its computer vision, and the onslaught of old scanned photos should be a brand-new test for the app’s impressive AI.
Along with the PhotoScan app, Google announced some new editing features for all Google Photos users: A new version of Auto Enhance that uses exposure and saturation levels inspired by pro photo editors, new controls for light and color levels, and a dozen new “looks” that go beyond your average Instagram filter by adapting their effects to the attributes of each photo.
The free PhotoScan will be available for Android and iOS starting today, while the new Google Photos updates should begin rolling out immediately as well.
" |
721 | 2,016 | "Apple's 'Differential Privacy' Is About Collecting Your Data---But Not Your Data | WIRED" | "https://www.wired.com/2016/06/apples-differential-privacy-collecting-data" | "Andy Greenberg | Security | Apple's 'Differential Privacy' Is About Collecting Your Data---But Not Your Data
Senior vice president of software engineering Craig Federighi. Photograph: Justin Kaneps for WIRED
Apple, like practically every mega-corporation, wants to know as much as possible about its customers. But it's also marketed itself as Silicon Valley's privacy champion, one that---unlike so many of its advertising-driven competitors---wants to know as little as possible about you. So perhaps it's no surprise that the company has now publicly boasted about its work in an obscure branch of mathematics that deals with exactly that paradox.
At the keynote address of Apple's Worldwide Developers' Conference in San Francisco on Monday, the company's senior vice president of software engineering Craig Federighi gave his familiar nod to privacy, emphasizing that Apple doesn't assemble user profiles, does end-to-end encrypt iMessage and Facetime, and tries to keep as much computation as possible that involves your private information on your personal device rather than on an Apple server. But Federighi also acknowledged the growing reality that collecting user information is crucial to making good software, especially in an age of big data analysis and machine learning. The answer, he suggested rather cryptically, is "differential privacy."
"We believe you should have great features and great privacy," Federighi told the developer crowd. "Differential privacy is a research topic in the areas of statistics and data analytics that uses hashing, subsampling and noise injection to enable...crowdsourced learning while keeping the data of individual users completely private. Apple has been doing some super-important work in this area to enable differential privacy to be deployed at scale." Differential privacy, translated from Apple-speak, is the statistical science of trying to learn as much as possible about a group while learning as little as possible about any individual in it. With differential privacy, Apple can collect and store its users’ data in a format that lets it glean useful notions about what people do, say, like and want. But it can't extract anything about a single, specific one of those people that might represent a privacy violation. And neither, in theory, could hackers or intelligence agencies.
"With a large dataset that consists of records of individuals, you might like to run a machine learning algorithm to derive statistical insights from the database as a whole, but you want to prevent some outside observer or attacker from learning anything specific about some [individual] in the data set," says Aaron Roth, a University of Pennsylvania computer science professor whom Apple's Federighi named in his keynote as having "written the book" on differential privacy. (That book, co-written with Microsoft researcher Cynthia Dwork, is the Algorithmic Foundations of Differential Privacy [PDF].
) "Differential privacy lets you gain insights from large datasets, but with a mathematical proof that no one can learn about a single individual." As Roth notes when he refers to a "mathematical proof," differential privacy doesn't merely try to obfuscate or "anonymize" users' data. That anonymization approach, he argues, tends to fail. In 2007, for instance, Netflix released a large collection of its viewers' film ratings as part of a competition to optimize its recommendations, removing people's names and other identifying details and publishing only their Netflix ratings. But researchers soon cross-referenced the Netflix data with public review data on IMDB to match up similar patterns of recommendations between the sites and add names back into Netflix's supposedly anonymous database.
That sort of de-anonymizing trick has countermeasures---say, removing the titles of the Netflix films and keeping only their genre. But there's never a guarantee that some other clever trick or cross-referenced data couldn't undo that obfuscation. "If you start to remove people's names from data, it doesn't stop people from doing clever cross-referencing," says Roth. "That's the kind of thing that's provably prevented by differential privacy."
Differential privacy, Roth explains, seeks to mathematically prove that a certain form of data analysis can't reveal anything about an individual---that the output of an algorithm remains identical with and without the input containing any given person's private data. "You might do something more clever than the people before to anonymize your data set, but someone more clever than you might come around tomorrow and de-anonymize it," says Roth. "Differential privacy, because it has a provable guarantee, breaks that loop. It's future proof."
Federighi's emphasis on differential privacy likely means Apple is actually sending more of your data than ever off of your device to its servers for analysis, just as Google and Facebook and every other data-hungry tech firm does. But Federighi implies that Apple is only transmitting that data in a transformed, differentially private form. In fact, Federighi named three of those transformations: hashing, a cryptographic function that irreversibly turns data into a unique string of random-looking characters; subsampling, or taking only a portion of the data; and noise injection, adding random data that obscures the real, sensitive personal information. (As an example of that last method, Microsoft's Dwork points to the technique in which a survey asks if the respondent has ever, say, broken a law. But first, the survey asks them to flip a coin. If the result is tails, they should answer honestly. If the result is heads, they're instructed to flip the coin again and then answer "yes" for heads or "no" for tails. The resulting random noise can be subtracted from the results with a bit of algebra, and every respondent is protected from punishment if they admitted to lawbreaking.)
When WIRED asked for more information on how it applies differential privacy, an Apple representative responded only by referring to the iOS 10 preview guide, which described how the techniques will be used in the latest version of Apple's mobile operating system: "Starting with iOS 10, Apple is using Differential Privacy technology to help discover the usage patterns of a large number of users without compromising individual privacy. To obscure an individual's identity, Differential Privacy adds mathematical noise to a small sample of the individual's usage pattern. As more people share the same pattern, general patterns begin to emerge, which can inform and enhance the user experience. In iOS 10, this technology will help improve QuickType and emoji suggestions, Spotlight deep link suggestions and Lookup Hints in Notes."
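Dwork's coin-flip survey is simple enough to run. Here's a self-contained Python version; the 10 percent "true rate" is an invented example, and the debiasing line just inverts E[yes] = p/2 + 1/4:

    import random

    def respond(truth):
        if random.random() < 0.5:        # tails: answer honestly
            return truth
        return random.random() < 0.5     # heads: flip again, answer at random

    def estimate_rate(answers):
        p_yes = sum(answers) / len(answers)
        return 2 * (p_yes - 0.25)        # the "bit of algebra" that subtracts the coin noise

    population = [random.random() < 0.1 for _ in range(100_000)]  # true rate: 10%
    noisy = [respond(t) for t in population]
    print(estimate_rate(noisy))          # comes out close to 0.10

No single answer incriminates anyone---any "yes" can be blamed on the coin---yet the aggregate estimate converges on the truth.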
Whether Apple is using differential privacy techniques with the rigor necessary to fully protect its customers' privacy, of course, is another question. In his keynote, Federighi said that Apple had given the University of Pennsylvania's Roth a "quick peek" at its implementation of the mathematical techniques it used. But Roth told WIRED he couldn't comment on anything specific that Apple's doing with differential privacy. Instead, much like the techniques he's helped to study and invent, Roth offered a general takeaway that successfully avoided revealing any details: "I think they're doing it right."
" |
722 | 2,016 | "Microsoft Neural Net Shows Deep Learning Can Get Way Deeper | WIRED" | "https://www.wired.com/2016/01/microsoft-neural-net-shows-deep-learning-can-get-way-deeper" | "Cade Metz | Business | Microsoft Neural Net Shows Deep Learning Can Get Way Deeper
Computer vision is now a part of everyday life. Facebook recognizes faces in the photos you post to the popular social network.
The Google Photos app can find images buried in your collection, identifying everything from dogs to birthday parties to gravestones.
Twitter can pinpoint pornographic images without help from human curators.
All of this "seeing" stems from a remarkably effective breed of artificial intelligence called deep learning.
But as far as this much-hyped technology has come in recent years, a new experiment from Microsoft Research shows it's only getting started. Deep learning can go so much deeper.
This revolution in computer vision was a long time coming. A key turning point came in 2012, when artificial intelligence researchers from the University of Toronto won a competition called ImageNet.
ImageNet pits machines against each other in an image recognition contest---which computer can identify cats or cars or clouds more accurately?---and that year, the Toronto team, including researcher Alex Krizhevsky and professor Geoff Hinton, topped the contest using deep neural nets, a technology that learns to identify images by examining enormous numbers of them, rather than identifying images according to rules diligently hand-coded by humans.
Toronto's win provided a roadmap for the future of deep learning. In the years since, the biggest names on the 'net---including Facebook, Google, Twitter, and Microsoft---have used similar tech to build computer vision systems that can match and even surpass humans. "We can't claim that our system 'sees' like a person does," says Peter Lee, the head of research at Microsoft. "But what we can say is that for very specific, narrowly defined tasks, we can learn to be as good as humans." Roughly speaking, neural nets use hardware and software to approximate the web of neurons in the human brain. This idea dates to the 1980s, but in 2012, Krizhevsky and Hinton advanced the technology by running their neural nets atop graphics processing units, or GPUs. These specialized chips were originally designed to render images for games and other highly graphical software, but as it turns out, they're also suited to the kind of math that drives neural nets. Google, Facebook, Twitter, Microsoft, and so many others now use GPU-powered AI to handle image recognition and so many other tasks, from Internet search to security. Krizhevsky and Hinton joined the staff at Google.
Now, the latest ImageNet winner is pointing to what could be another step in the evolution of computer vision---and the wider field of artificial intelligence. Last month, a team of Microsoft researchers took the ImageNet crown using a new approach they call a deep residual network. The name doesn't quite describe it. They've designed a neural net that's significantly more complex than typical designs---one that spans 152 layers of mathematical operations, compared to the typical six or seven. It shows that, in the years to come, companies like Microsoft will be able to use vast clusters of GPUs and other specialized chips to significantly improve not only image recognition but other AI services, including systems that recognize speech and even understand language as we humans naturally speak it.
In other words, deep learning is nowhere close to reaching its potential. "We're staring at a huge design space," Lee says, "trying to figure out where to go next." Deep neural networks are arranged in layers. Each layer is a different set of mathematical operations---aka algorithms. The output of one layer becomes the input of the next. Loosely speaking, if a neural network is designed for image recognition, one layer will look for a particular set of features in an image---edges or angles or shapes or textures or the like---and the next will look for another set. These layers are what make these neural networks deep. "Generally speaking, if you make these networks deeper, it becomes easier for them to learn," says Alex Berg, a researcher at the University of North Carolina who helps oversee the ImageNet competition.
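A schematic of what "layers" means in code may help. This is an illustrative NumPy forward pass with random weights and no training, just to show how one layer's output becomes the next layer's input:

    import numpy as np

    def layer(x, W):
        return np.maximum(0, W @ x)   # a linear map followed by a ReLU nonlinearity

    x = np.random.randn(64)           # the input signal
    for W in [0.1 * np.random.randn(64, 64) for _ in range(7)]:   # a seven-layer net
        x = layer(x, W)               # each layer feeds the next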
Today, a typical neural network includes six or seven layers. Some might extend to 20 or even 30. But the Microsoft team, led by researcher Jian Sun, just expanded that to 152. In essence, this neural net is better at recognizing images because it can examine more features. "There is a lot more subtlety that can be learned," Lee says.
In the past, according to Lee and researchers outside of Microsoft, this sort of very deep neural net wasn't feasible. Part of the problem was that as your mathematical signal moved from layer to layer, it became diluted and tended to fade. As Lee explains, Microsoft solved this problem by building a neural net that skips certain layers when it doesn't need them, but uses them when it does. "When you do this kind of skipping, you're able to preserve the strength of the signal much further," Lee says, "and this is turning out to have a tremendous, beneficial impact on accuracy." Berg says that this is a notable departure from previous systems, and he believes that other companies and researchers will follow suit.
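In code, the "skipping" Lee describes amounts to adding a layer's input back to its output, so the signal always has an identity path straight through. This is a stripped-down sketch of the residual idea, not the team's exact architecture (their real blocks stack several learned operations per shortcut):

    import numpy as np

    def residual_block(x, W):
        # identity shortcut plus a learned correction; when the correction
        # is near zero, the signal passes through undiminished
        return x + np.maximum(0, W @ x)

    x = np.random.randn(64)
    for W in [0.01 * np.random.randn(64, 64) for _ in range(152)]:  # 152 layers deep
        x = residual_block(x, W)

With plain layers, 152 repeated transformations would dilute the signal; with shortcuts, depth becomes survivable.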
The other issue is that constructing this kind of mega-neural net is tremendously difficult. Landing on a particular set of algorithms---determining how each layer should operate and how it should talk to the next layer---is an almost epic task. But Microsoft has a trick here, too. It has designed a computing system that can help build these networks.
As Jian Sun explains it, researchers can identify a promising arrangement for massive neural networks, and then the system can cycle through a range of similar possibilities until it settles on the best one. "In most cases, after a number of tries, the researchers learn [something], reflect, and make a new decision on the next try," he says. "You can view this as 'human-assisted search.'"
According to Adam Gibson---the chief researcher at deep learning startup Skymind---this kind of thing is getting more common. It's called "hyperparameter optimization." "People can just spin up a cluster [of machines], run 10 models at once, find out which one works best and use that," Gibson says. "They can input some baseline parameter---based on intuition---and the machines kind of home in on what the best solution is." As Gibson notes, last year Twitter acquired a company, Whetlab, that offers similar ways of "optimizing" neural networks.
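Mechanically, the simplest form of this search is just random sampling over configurations. A hypothetical sketch, where the evaluate stub stands in for "train a model with these settings and return its validation accuracy":

    import random

    def evaluate(config):
        # stand-in: in practice this trains a model on a cluster machine
        return -abs(config["lr"] - 0.01) - 0.001 * config["layers"]

    candidates = [{"lr": 10 ** random.uniform(-4, -1),
                   "layers": random.choice([7, 20, 50, 152])}
                  for _ in range(10)]        # "run 10 models at once"
    best = max(candidates, key=evaluate)

Smarter optimizers use the results of earlier runs to pick the next configurations to try, rather than sampling blindly.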
As Peter Lee and Jian Sun describe it, such an approach isn't exactly "brute forcing" the problem. "With very very large amounts of compute resources, one could fantasize about a gigantic 'natural selection' setup where evolutionary forces help direct a brute-force search through a huge space of possibilities," Lee says. "The world doesn't have those computing resources available for such a thing...For now, we will still depend on really smart researchers like Jian." Google Just Open Sourced TensorFlow, Its Artificial Intelligence Engine Facebook Aims Its AI at the Game No Computer Can Crack Google Made a Chatbot That Debates the Meaning of Life But Lee does say that, thanks to new techniques and computer data centers filled with GPU machines, the realm of possibilities for deep learning are enormous. A big part of the company's task is just finding the time and the computing power needed to explore these possibilities. "This work has dramatically exploded the design space. The amount of ground to cover, in terms of scientific investigation, has become exponentially larger," Lee says. And this extends well beyond image recognition, into speech recognition, natural language understanding, and other tasks.
As Lee explains, that's one reason Microsoft is not only pushing to improve the power of its GPU clusters, but exploring the use of other specialized processors, including FPGAs---chips that can be programmed for particular tasks, such as deep learning. "There has also been an explosion in demand for much more experimental hardware platforms from our researchers," he says. And this work is sending ripples across the wider world of tech and artificial intelligence. This past summer, in its largest ever acquisition deal, Intel agreed to buy Altera, which specializes in FPGAs.
Indeed, Gibson says that deep learning has become more of "a hardware problem." Yes, we still need top researchers to guide the creation of neural networks, but more and more, finding new paths is a matter of brute-forcing new algorithms across ever more powerful collections of hardware. As Gibson points out, though these deep neural nets work extremely well, we don't quite know why they work. The trick lies in finding the complex combination of algorithms that work the best. More and better hardware can shorten the path.
The end result is that the companies that can build the most powerful networks of hardware are the companies that will come out ahead. That would be Google and Facebook and Microsoft. Those that are good at deep learning today will only get better.
" |
723 | 2,016 | "Intel Reinvents Itself to Stay King in a Changing World | WIRED" | "https://www.wired.com/2016/08/intel-remakes-changing-world" | "Cade Metz | Business | Intel Reinvents Itself to Stay King in a Changing World
Intel is bigger than all but 50 other U.S. companies, and that's because of something called the CPU.
If you were around in the '90s or the early aughts, you saw the TV ads.
Intel Inside.
For decades, Intel has supplied a majority of the chips that sit at the heart of our personal computers, including desktops as well as laptops. These chips are called central processing units, CPUs for short. They handle most all of the digital calculations that drive our PCs.
They also handle most of the calculations inside the millions upon millions of computer servers that run Internet services like Google Search, Facebook, Amazon, and Twitter. And Intel came to dominate this market too. It now builds 99 percent of all CPUs that wind up in a computer server, according to research firm IDC. When you use the Internet, you use Intel.
But the chip market is now shifting in new directions. And as it shifts, Intel is remaking itself in an effort to stay on top of the heap.
The world's largest chip maker somehow missed the shift away from the PC and toward the smartphone.
Other chip makers supply most of the silicon at the heart of our phones. But Intel now sees that the game is changing on the Internet as well. To run their myriad online services---operations of unprecedented size and complexity---Internet giants like Google, Facebook, and Microsoft need more than just CPUs inside their millions of servers. They're using all sorts of alternative chips to accelerate particular technologies, most notably the new breed of artificial intelligence.
So, Intel is remaking itself as a company that can build these chips too.
Last summer, Intel paid $16.7 billion to acquire Altera , a company whose programmable chips, known as FPGAs, help choose search results inside Bing , the Microsoft search engine. This was the largest acquisition in the history of Intel. And then, earlier this week, the company agreed to acquire Nervana, a startup building chips just for deep neural networks, AI services that can learn tasks by analyzing enormous amounts of data. At Google, Facebook, and so many others, deep neural nets are now recognizing photos, identifying spoken words, and translating from one language to another---among other tasks---and that's why Intel paid an apparent $408 million for Nervana.
"We're now at the precipice of the next big wave of growth," says Intel vice president Jason Waxman, "and that's going to be driven by artificial intelligence." Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg These hefty buys show just how rapidly the global chip game is changing. As Microsoft explores FPGAs as a way of accelerating search, it's training deep neural networks with massive farms of GPUs, chips originally built for rendering images for games and other highly graphical software.
So many other Internet companies are doing the same.
And at Google, engineers have gone so far as to build their own alternative chip , dubbed the TPU. After GPUs help train a neural network to, say, recognize faces in photos, Google's TPUs help execute this neural network, putting it to use in the real world.
It's an attack of the geek acronyms. FPGAs. GPUs. TPUs. And certainly, keeping it all straight is far from easy. But the trend isn't hard to see. The world is moving onto Internet services, and these Internet services now require many chips beyond the classic CPU.
As Microsoft vice president Peter Lee explains it, our Internet services are evolving more quickly than our CPUs. CPUs continue to mature according to Moore's Law , getting faster every two years or so, but that's not enough to accommodate the rise of deep learning. Nor can it handle the tremendous growth of our online services. So, we need chips that can handle "post-CPU workloads," in the words of Lee, who oversees a new Microsoft Research operation called NExT.
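The arithmetic behind that complaint is simple. Under the rough doubling-every-two-years reading of Moore's Law the article invokes:

    years = 10
    doublings = years / 2
    gain = 2 ** doublings      # about 32x in a decade

A 32x gain over ten years sounds generous until demand for neural-network computation grows faster than that, which is exactly the pressure Lee is describing.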
"Increasingly, we're looking at more specialized hardware," he says.
It's significant that so many acronyms are in the mix. We're at the beginning of this movement, with Internet companies exploring so many possibilities, and it's unclear how things will eventually pan out. Will the market settle on one or two chips? Or will it be more? And what will those be? It's telling that Intel has acquired not just Altera, but Nervana too. Wherever the market goes, it wants to be there.
" |
724 | 2,016 | "The Epic Story of Dropbox's Exodus From the Amazon Cloud Empire | WIRED" | "https://www.wired.com/2016/03/epic-story-dropboxs-exodus-amazon-cloud-empire" | "Cade Metz | Business | The Epic Story of Dropbox's Exodus From the Amazon Cloud Empire
If you're one of 500 million people who use Dropbox, it's just a folder on your computer desktop that lets you easily store files on the Internet, send them to others, and synchronize them across your laptop, phone, and tablet. You use this folder, then you forget it. And that's by design. Peer behind that folder, however, and you'll discover an epic feat of engineering. Dropbox runs atop a sweeping network of machines whose evolution epitomizes the forces that have transformed the heart of the Internet over the past decade. And today, this system entered a remarkable new stage of existence.
For the first eight years of its life, you see, Dropbox stored billions and billions of files on behalf of those 500 million computer users. But, well, the San Francisco startup didn't really store them on its own. Like so many other tech startups in recent years, Dropbox ran its online operation atop what is commonly called "the Amazon cloud," a hugely popular service run by, yes, that Amazon---the world's largest online retailer. Amazon's cloud computing service lets anyone build and operate software without setting up their own hardware. In other words, those billions of files were stored on Amazon's machines, rather than machines owned and operated by Dropbox.
But not anymore. Over the last two-and-a-half years, Dropbox built its own vast computer network and shifted its service onto a new breed of machines designed by its own engineers, all orchestrated by a software system built by its own programmers with a brand new programming language. Drawing on the experience of Silicon Valley veterans who erected similar technology inside Internet giants like Google and Facebook and Twitter, it has successfully moved about 90 percent of those files onto this new online empire.
It's a feat of extreme engineering, to be sure. But the significance of this move extends well beyond Dropbox. Rather ironically, it highlights how cloud computing is rapidly transforming the way businesses operate. And at the same time, it reveals some enormous changes that have swept the worldwide hardware market over the last ten years.
Today, more and more companies are moving onto "the cloud"---not off. By 2020, according to Forrester, cloud computing will be a $191 billion market, with giants like Google and Microsoft challenging Amazon with their own cloud services. Amazon, which declined to comment for this story, just reported $2.41 billion in revenue for its Amazon Web Services division during the fourth quarter of last year, or more than $9.6 billion in annualized sales---and that's pretty much after Dropbox's move.
But some companies get so big, it actually makes sense to build their own network with their own custom tech and, yes, abandon the cloud. Amazon and Google and Microsoft can keep cloud prices low, thanks to economies of scale. But they aren't selling their services at cost. "Nobody is running a cloud business as a charity," says Dropbox vice president of engineering and ex-Facebooker Aditya Agarwal.
"There is some margin somewhere." If you're big enough, you can save tremendous amounts of money by cutting out the cloud and all the other fat. Dropbox says it's now that big.
That said, building a network of this size is a ridiculously difficult task. And it's certainly not for everyone. "The right answer is to actually not do this yourself," says Urs Hölzle, the former University of California, Santa Barbara, professor who, as Google employee number eight, oversaw the creation of the company's global network and now helps run its cloud computing services. Most companies, he explains, lack the size and the sophistication needed to reach those economies of scale. And if a company's growth stalls, a move like this could come back to haunt it. This point is particularly relevant with Dropbox. In recent months, pundits and investors have turned sour on the San Francisco-based company, saying that its $10 billion valuation is all out of whack and that it's been slow to attract real business customers.
But Hölzle acknowledges that for some companies, the move still makes sense. And at least for right now, Dropbox is one of those companies. According to chief operating officer Dennis Woodside, the company gets "substantial economic value" by running its own operation. The irony is that in fleeing the cloud, Dropbox is showing why the cloud is so powerful. It too is building infrastructure so that others don't have to. It too is, well, a cloud company. And in moving onto its own vast network, Dropbox is joining giants like Amazon and Google and Microsoft in pushing the worldwide hardware market---and all of information technology---in an entirely new direction.
Amazon dominates the primary cloud computing market. And its primary competitors are Google and Microsoft. All three offer services that let businesses and independent coders build and run whatever software they want without setting up their own hardware. And all three bring the leverage you'll only see in the world's largest tech companies.
At the same time, there's a growing secondary market centered around Dropbox, its arch-rival Box.com, Salesforce.com, Workday, and others. These companies fit into a different niche---offering pre-built software applications over the Internet. Like the bigger companies, they too deliver tools that businesses and developers can use without setting up their own hardware---the essential appeal of the cloud. "The next major era for this industry is a battle of platforms," says Aaron Levie, the CEO of Box.com. "What are the next platforms that enterprises are going to build on top of?" Dropbox wants to be one of them. And so it has taken a big chance on building a cloud of its own. But this won't be easy. The company will face increasing competition from Amazon and Google and Microsoft as they continue to expand into pre-built software. In fact, these giants are already challenging the likes of Dropbox and Box with their own file-sharing tools. And the file-sharing market will likely be less expansive in the future. The sharing of discrete files---standalone photos and videos and Word docs and spreadsheets---is becoming less important. Files aren't at the center of how we use our smartphones. And with always-on messaging and collaboration services like Slack, the file is becoming less of a focal point on the desktop as well.
Dropbox knows all this. Its enormously high valuation has made it a target for pundits and investors decrying the rise of the "unicorns." In recent months, no startup has received more heat than Dropbox, with many questioning its ability to compete in the business world against the giants of the Internet. Judging from extensive conversations with executives at the company, it's clear that Dropbox very much realizes the world is changing. The question is whether---after all the time, money, and effort it's spent moving itself onto its own global network---its own changes are in sync with where the world is headed.
James Cowling knew the creators of Dropbox from his days at MIT. As a graduate student at the university, he focused on distributed systems---computing systems that run across dozens, hundreds, or even thousands of machines---and he studied with some of the earliest Dropbox employees. That's how he met Drew Houston, the Dropbox co-founder and CEO. As Dropbox grew, they kept in touch, and here and there, they mulled the hows and whys of a Dropbox that could operate on its own, without Amazon. "It seemed a moonshot," Cowling says.
In 2012, Cowling says, Google---the Internet's most moonshot-driven company---offered him a spot on the engineering team that oversees Spanner , the global database that drives so much of the search giant's online operation. Spanner is probably the largest and most complex single database on Earth---one of the most distributed of distributed systems. But instead, Cowling went to work at Dropbox. "I wanted to build something," Cowling says. Spanner already existed. The Dropbox empire did not.
For most of its existence, Dropbox ran partly on Amazon and partly off. If a bunch of people shared some files via Dropbox, the company stored the files on Amazon's Simple Storage Service, or S3, while housing all the metadata related to those files---who they belonged to, who was allowed to download them, and more---on its own machines inside its own data center space.
Working alongside vice president of infrastructure Akhil Gupta, an ex-Googler, and others, Cowling designed a sweeping software system that would allow Dropbox to store hundreds of petabytes of data---enough data to fill hundreds of millions of USB thumb drives---and store it far more efficiently than the company ever did on Amazon S3. They called this system Magic Pocket. "Dropbox was envisioned as a place where you keep all your stuff, it doesn't get lost, and you can always access it," Gupta says. "A magic pocket."
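The article doesn't detail Magic Pocket's internals, but systems of this kind tend to share a pattern: split each file into fixed-size blocks, name every block by a hash of its contents, and derive replica placement from that name. A generic, hypothetical sketch, where the block size, replica count, and placement rule are all invented for illustration:

    import hashlib

    BLOCK_SIZE = 4 * 1024 * 1024   # assumed 4 MB blocks

    def place_blocks(data, nodes, replicas=2):
        # Map each content-addressed block to the machines that will hold it.
        placement = {}
        for i in range(0, len(data), BLOCK_SIZE):
            digest = hashlib.sha256(data[i:i + BLOCK_SIZE]).hexdigest()
            start = int(digest, 16) % len(nodes)
            placement[digest] = [nodes[(start + r) % len(nodes)]
                                 for r in range(replicas)]
        return placement

Content addressing also buys deduplication for free: two users storing the same block produce the same digest, so the bytes are stored once.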
For years, Internet giants like Google, Facebook, Microsoft, and Amazon have designed their own data center hardware---computer servers, networking switches, and, in some cases, hardware for storing massive amounts of data.
These companies had no choice but to build all this stuff: Their online empires grew so large that using traditional gear was just too expensive and too difficult. They needed a new breed of hardware that was cheaper, more streamlined, and more malleable. So they built it, working alongside hardware manufacturers and parts suppliers in Asia and elsewhere.
Today, Google builds more servers than almost anyone on Earth---and it doesn't even sell servers. Much the same goes for Amazon and Microsoft. And since those companies also run cloud computing services, many other businesses are now running their software on machines forged outside the grip of traditional hardware vendors. This is particularly true after Facebook open sourced the designs for its custom-built gear. Now a bunch of vendors, including Asian manufacturers like Quanta , sell stuff that's based on Facebook hardware.
Rami Aljamal witnessed this movement firsthand. He built this new breed of streamlined machine inside Twitter and at the new DCS arm of Dell---an effort to recapture some of the market the company lost when companies like Google started designing their own hardware. Now, he designs machines at Dropbox. Like Google and Amazon and Microsoft, Dropbox decided it needed machines that fit its unique needs.
Dropbox stores enormous amounts of data, so it needed machines suited to that task. And that's what Aljamal and his team built, working out of a lab inside Dropbox headquarters in San Francisco just across from AT&T Park, home of the Giants. They call these machines Diskotech. "The thing we care about the most is the disk," says Aljamal. "That's where all the bytes are." Measuring only one and a half feet by three and a half feet by six inches, each Diskotech box holds as much as a petabyte of data, or a million gigabytes. Just 50 of these machines could store everything human beings have ever written.
Cowling and crew started work on the Magic Pocket software in the summer of 2013 and spent about six months building the initial code. But this was a comparatively small step. Once the system was built, they had to make sure it worked. They had to get it onto thousands of machines inside multiple data centers. They had to tailor the software to their new hardware. And, yes, they had to get all that data off of Amazon.
Christie Hemm Klok/WIRED
The whole process took two years. A project like this, needless to say, is a technical challenge. But it's also a logistical challenge.
Moving that much data across the Internet is one thing. Moving that many machines into data centers is another. And they had to do both, as Dropbox continued to serve hundreds of millions of people. "It's like a moving car," says Dan Williams, a former Facebook network engineer who oversaw much of the physical expansion, "and you want to be able to change a tire while still driving." In other words, while making all these changes, Dropbox couldn't very well shut itself down. It couldn't tell the hundreds of millions of users who relied on Dropbox that their files were temporarily unavailable. Ironically, one of the best measures of success for this massive undertaking would be that users wouldn't notice it had happened at all.
Once Cowling and crew built the initial code, they tested it on a network of pretty standard hardware---a kind of shadow version of Dropbox that juggled roughly 20 percent of the data that was housed on Amazon. They vowed to test the code for 180 days without finding a major bug, even hanging a countdown clock on the wall at Dropbox HQ. And when a bug turned up after two months---a bug that could have seen data stored in the wrong place---they reset the clock. In all, the testing took eight months.
Confident the system could run all of Dropbox, the team then moved the code on to more and more systems while copying more and more data from Amazon. Its main contracts with Amazon were set to expire in another six months, and the Dropbox braintrust resolved to complete the move by then, so that the company wouldn't have to re-up. "There was a very short amount of time to open up the parachute," Cowling says.
Just getting the bits out of Amazon and into other data centers was an epic task. Digitally moving petabytes of data from one machine to another isn't exactly on the same scale as downloading a few songs for your laptop. Even the fattest Internet pipes only have so much bandwidth. Transferring four petabytes of data, it turned out, took about a day. "You're restricted by the speed of light," Agarwal says.
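To get a feel for that constraint, here's a back-of-the-envelope check in Python. The arithmetic is my own illustration, not Dropbox's published math, and it assumes decimal petabytes and a full 24-hour transfer window:

```python
# What sustained throughput does "four petabytes in about a day" imply?
PETABYTE = 10**15  # bytes (decimal; a binary pebibyte would be 2**50 bytes)

data_bytes = 4 * PETABYTE
seconds_per_day = 24 * 60 * 60

gbps = data_bytes * 8 / seconds_per_day / 1e9
print(f"~{gbps:.0f} Gbps sustained")  # ~370 Gbps, around the clock
```

At roughly 370 gigabits per second of continuous traffic, the network, not the disks, sets the pace, which helps explain why draining hundreds of petabytes took months rather than days.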
Meanwhile, computers must be moved into data centers and set up to receive all those bits. Picture the IT guy in your office trying to set up a new employee's computer---but on the scale of Dropbox. And all that physical effort came with a time limit. If they couldn't get the systems into the data centers fast enough, they couldn't get the data off of Amazon fast enough. The company was installing forty to fifty racks of hardware a day, each rack holding about eight individual machines. At one point, they were slowed by some ill-timed crashes---and not the computer kind. In one twenty-four-hour period, two trucks carrying machines to Dropbox data centers in different parts of the country had accidents.
Despite those accidents and everything else, Dropbox made its deadline. And it dropped those contracts with Amazon. The company continues to use the Amazon cloud in Europe---just because the business is growing in a less predictable way in Europe---but Gupta and his team had moved ninety percent of all files into Dropbox data centers. And then came the really extreme engineering.
As all that data streamed off the Amazon cloud, hardware engineer Rami Aljamal pow-wowed with a coder named Jamie Turner. Magic Pocket---Dropbox's version of Amazon's file-storage system---was still running on run-of-the-mill machines. The next step was to move it onto the company's custom-built hardware. Aljamal and Turner, an English major turned engineer who is now a veteran of multiple tech startups, joined forces to ensure this new hardware dovetailed with the software. Aljamal and his hardware engineers designed a single machine, Diskotech, that could hold a petabyte of data. But there was a problem. The Magic Pocket software didn’t quite fit this new hardware. So Turner rebuilt Magic Pocket in an entirely different programming language.
Christie Hemm Klok/WIRED
That may seem odd. Why put the code onto thousands of machines only to change the code and put it onto thousands of other machines? But in the largest Internet data centers, this is just how things work. Machines get old quickly. Parts fail constantly. And then you replace them. You're always upgrading what you have. First, Dropbox made sure that Magic Pocket ran on ordinary gear---which was hard enough. Then it honed its hardware. Then it had to make sure the two worked well together.
Cowling, Turner, and others originally built Magic Pocket using a new programming language from Google called Go.
Here too, Dropbox is riding a much larger trend: a new crop of languages designed specifically for the new world of massively distributed online systems. Apple has one called Swift, Mozilla makes one called Rust, and there's an independent one called D.
All these languages let coders quickly build software that runs quickly---even when executed across hundreds or thousands of machines.
Christie Hemm Klok/WIRED
But Go's "memory footprint"---the amount of computer memory it demands while running Magic Pocket---was too high for the massive storage systems the company was trying to build. Dropbox needed a language that would take up less space in memory, because so much memory would be filled with all those files streaming onto the machine. So, in the middle of this two-and-a-half-year project, they switched to Rust on the Diskotech machines. And that's what Dropbox is now pushing into its data centers.
It is extreme. But now that companies like Google and Amazon and Dropbox have gone through this kind of thing, most others don't have to. That's the power of cloud computing. No, Dropbox isn't Google or Amazon. It doesn't offer raw computing power and infrastructure that lets coders and businesses build and run any software they like. But it does let individuals and businesses share and store files without setting up dedicated hardware---which, as businesses grow, becomes harder and harder for them to do. Sharing, the company hopes, will become a platform. That's why Dropbox has created an online text editor and collaboration tool called Dropbox Paper. Outside developers, from Microsoft on down, can plug their own apps into its service as well.
The danger is that as Amazon and Google and Microsoft expand their own services, they will restrict the growth of Dropbox. In that case, the company's move into its own data centers could become more of a burden than a blessing. Famously, when San Francisco social gaming company Zynga reached its own hypergrowth phase, the company moved off of the cloud and into its own data centers. But then its business imploded, and it was left with infrastructure it didn't really need. It's now back on Amazon.
Christie Hemm Klok/WIRED
For Dropbox, one advantage is that people like Agarwal and Gupta and Williams and Sordal have all played the game before, and they've played it at the companies who play it best. Dan Williams says there's a buzz that comes from this extreme engineering. "If you've experienced anything in your past like a Facebook or a Google, you sort of get addicted to that hypergrowth," Williams says. "You miss it when you don't feel it."
" Dell. EMC. HP. Cisco. These Tech Giants Are the Walking Dead Revealed: The Secret Gear Connecting Google’s Online Empire Dropbox’s New ‘Paper’ Is Yet Another All-in-One Work Tool That’s not an empty thing. It's a buzz that can save a company millions upon millions of dollars. But like any addiction, this one comes with its own perils. It can lead to what those in the Valley call Not Invented Here Syndrome, where companies start building all sorts of new stuff just because they're intent on building all sorts of new stuff.
Whether it creates the kind of business Dropbox is hoping to build, or it just ends up as a huge engineering high, the company now has its own invention. Dropbox has built its own box. This represents an attitude that began with Google and has gradually spread across Silicon Valley. Google was so successful not just because it built a pretty good Internet search engine, but because it built the underlying technology needed to run that search engine---and so many other services---at an enormous scale. Facebook, which recruited countless ex-Googlers, did much the same. And so did Twitter and its ex-Googlers. And, now, so has Dropbox. To become a giant, you may have to stand on the shoulders of others. But once you become your own giant, you start to feel like you need to build a home that's just right for you.
" |
725 | 2,016 | "Netflix Isn't Made for the US Anymore—It's for the Whole World | WIRED" | "https://www.wired.com/2016/01/in-the-us-were-now-watching-the-worlds-netflix" | "
Brian Barrett
Netflix Isn't Made for the US Anymore—It's for the Whole World
The Ridiculous Six. Ursula Coyote/Netflix
The latest Adam Sandler movie, The Ridiculous 6, has a one-star rating on Netflix. It receives a rare 0 percent score on review-aggregator Rotten Tomatoes. The regrettable portmanteau “Pocahontits” is spoken less than five minutes in. It is a bad movie. It also was, within 30 days of its release, the most-streamed movie in the company's history. This is how it goes in Netflix World.
Netflix has long had a global focus; it started the year in 60 countries, after all. And you can already see its international efforts reflected in some of its shows. Netflix executives point out that 80 percent of the dialog in Narcos is Spanish, and Adam Sandler “travels well,” which is to say, his appeal crosses borders, if not critical thresholds.
As Netflix continues to broaden its reach, it will provide an experience that’s more Singapore than South Carolina.
But with last week’s expansion into 130 international markets, Netflix now plays to practically every country on Earth not named China. It’s the first truly global content network, which has serious implications for the shows and movies it makes, and for how you watch them---especially if, as an American, you're used to thinking of yourself as the center of the tech world.
“I’ve been getting hourly updates since we turned it on,” says Netflix product chief Neil Hunt. The “it” in this case is Netflix, which became available to millions and millions of potential new customers around the world on Jan. 6.
Updates like that, the kind that tell Hunt who’s signing up, where they live, what they’re watching, and how much time they spend deciding what to watch, fuel Netflix's operations. By now, you probably know the company bases programming decisions—which increasingly means investing in original content—largely on what the data says customers like to watch. The same applies to the user interface, which is consistent worldwide with a few exceptions, mostly to accommodate countries that read right-to-left.
“We’re very much focused on building a global product, in part because we think it’s much more efficient,” says Hunt. “Even though the content library is distinct in each country, the principles we use are pretty universal.” Those principles are shaped almost entirely by new users. Current subscribers aren’t used as guinea pigs, Hunt says, because that doesn’t measure the effectiveness of a new product, only the change itself.
Your Netflix experience is largely determined by people approaching the service for the first time.
It’s hard to grasp the significance of this until you throw some numbers into the mix. In the last 12 months alone, Netflix ran 160 A/B tests, each representing two to 20 different experiences. The experiments focus on everything from how to enlist new users to UI adjustments (how big should the thumbnail be, and what should it show?) to algorithmic tweaks that determine what content surfaces for which audience.
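To make that concrete, here's a minimal sketch of how services typically assign users to test cells. It is purely illustrative, not Netflix's actual allocation code: hash a stable user ID together with the test name, so each user deterministically lands in the same variant on every visit.

```python
import hashlib

def assign_cell(user_id: str, test_name: str, num_cells: int) -> int:
    # Hash the (test, user) pair so assignment is stable across sessions
    # and independent across different tests.
    digest = hashlib.sha256(f"{test_name}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % num_cells

# A hypothetical test with 20 cells, the upper bound mentioned above:
print(assign_cell("user-12345", "signup-flow-v2", num_cells=20))
```

Because no per-user state is stored, the same function run anywhere in the fleet always gives the same answer, and new sign-ups flow into test cells the moment they arrive.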
Your Netflix experience, in other words, is largely determined by people approaching the service for the first time. Every minute change has been vetted by the responses of millions of new users---most of whom, it turns out, aren’t from the United States, or even the Western hemisphere. “In the last several quarters we’ve had two to three times more new international customers [than stateside],” says Hunt. “The choices we’ve made over the last year have been biased toward Europe and Japan.” Now imagine what happens to that multiple when you switch on 130 new countries all at once.
“It’s entirely possible that Netflix will see significantly faster growth in international subscribers with this new expansion, perhaps up to 20 million in a year,” says Jan Dawson, founder of Jackdaw Research. That would double 2015’s international growth, each new first-time sign-in helping shape what Netflix looks like for the rest of us.
Some further perspective: As of its most recent quarterly earnings report, Netflix had 37 million total US subscribers, growing at a rate of a little under four million per year. Its 16 million international subscribers amounted to a little less than half that figure. At those rates, not only will the international new-subscriber numbers dwarf the domestic gains, it won't be long at all before Netflix has more total users outside US borders than within.
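A quick, hedged calculation shows how soon. Take the article's own figures (37 million US subscribers adding roughly 4 million a year, 16 million international subscribers adding Dawson's upper-bound 20 million a year) and assume, unrealistically, that both rates stay linear:

```python
# Illustrative crossover estimate only; real subscriber growth is neither
# linear nor guaranteed to hit Dawson's 20M-per-year upper bound.
us_subs, us_growth = 37e6, 4e6
intl_subs, intl_growth = 16e6, 20e6

years = (us_subs - intl_subs) / (intl_growth - us_growth)
print(f"International overtakes domestic in ~{years:.1f} years")  # ~1.3 years
```

Under those assumptions, the crossover arrives in well under two years.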
If a user interface could ever be described as a melting pot, Netflix has made just that. And as it continues to broaden its reach, it will provide an experience that’s more Singapore than South Carolina.
The rise of Netflix World also has serious implications for those who consider how you scroll and click through the service secondary to what they watch. As Netflix increasingly invests in original content, it's aiming for shows and movies, like Narcos and Ridiculous 6 , that transcend cultural specificity. That’s already happening, however, at least in part as a happy accident of universal taste.
What's important isn’t where a show comes from, or even that it's particularly good. What’s important is that it attracts and retains new users in 190 countries.
“Part of what’s been fascinating for us is that the 60 markets we’ve been in, the content that succeeds tends to be somewhat consistent,” says Elizabeth Bradley, VP of content acquisition. “We tend to find it’s not like one set of titles does well in America and not overseas. … We think we can tell global stories that will resonate the same.”
That ambition is not, of course, unique to Netflix. “Studios have been ‘programming globally’ since their inception. They’ve been thinking globally since the end of World War I,” says Jennifer Holt, an associate professor of film and media studies at UCSB. “It’s not a new idea.” What might be new, though, or at least to Netflix’s current advantage, is that a dedicated streaming service faces far fewer limitations. “Studios had a finite number of screens,” says Holt. “Netflix doesn’t.”
The opportunity this affords Netflix is that rather than focusing on a few potential worldwide blockbusters a year, it can churn out a near-infinite variety of programs and films to see what sticks. You can see that intent reflected in the sheer scope of its home-grown efforts. It will release 31 new and returning original series, two dozen feature films and documentaries, and 30 original children’s series this year alone. Crucially, every one of those programs will be available “at the same time to members everywhere.” “Netflix is trying to be international in the same way as the big movie studios, but it fills in niches that those studios aren’t,” says Matt Jordan, associate professor of media studies at Penn State.
Some of those gaps are filled by lowest common denominator fare, like the four-film Sandler deal, or schlocky oddities like Hemlock Grove, and others by critical darlings like Narcos and Beasts of No Nation.
What may be most exciting is that these productions aren't just quality-agnostic. They can also create new international avenues of influence.
"American films have influenced other cultures for a long time, clearly, and as Netflix helps bring international work to an American audience, the influence now can run the other way," says Grant McCracken, a cultural anthropologist who has worked directly with Netflix and several other brands. "Not that Netflix is the only player here, but the deep Netflix catalog really opens up possibilities that the local art house cinema couldn't hope to deliver." Ultimately for Netflix, what’s important isn’t where a show comes from, or even that it's particularly good. What’s important is that it attracts and retains new users in 190 countries.
“They don’t have to be a boutique prestige producer or a studio that can appeal to the masses,” says Holt. “They can be anything, and everything.” By tripling the number of markets in which it operates, Netflix will also dramatically broaden the range of people to which its originals must appeal. It also, though, gives each of those originals a better chance to captivate a crowd. “Even Netflix’s less popular original shows have had decent audiences, but the good news is that Netflix now has a massive global audience to show this programming to,” says Dawson. “Even the less popular stuff can still receive a fairly significant audience across all those 190 countries put together.” All of which could result in some exciting cross-pollination—will streaming help Bollywood finally find a US fanbase?—or it could mean more racially charged train wrecks from '90s comedians. Or both. Only time, and the data, will tell.
But mostly the data.
The rapid globalization of Netflix will also have effects that you won’t necessarily see. Whereas a year ago the company touted its fancy-TV, HDR preparedness, this year Hunt focused on the other end of the spectrum. “We’re beginning to focus our attention much more on mobile,” says Hunt, which is how customers in markets like India predominantly consume his product. “You’ll see more innovation in the UI on mobile going forward.” Additionally, while most next-generation video codecs focus on 4K streaming, Hunt says Netflix has also prioritized a video format that helps with efficiency “at the low end.” He sees a viable low-resolution, low-bandwidth solution as still two or three years away, but it’s a serious enough goal that last fall Netflix formed the Alliance for Open Media, along with industry heavyweights like Google, Intel, Microsoft, and streaming rival Amazon.
Netflix’s truly international expansion, then, has had the same impact on its tech as it has on its interface and its content: a little bit of everything, informed by the needs of everyone.
Pleasing all of the people all of the time has never been a viable content model. Then again, no one’s ever had near-infinite chances to get it right.
" |
726 | 2,014 | "Microsoft Supercharges Bing Search With Programmable Chips | WIRED" | "https://www.wired.com/2014/06/microsoft-fpga" | "
Robert McMillan
Microsoft Supercharges Bing Search With Programmable Chips
Microsoft
Doug Burger called it Project Catapult.
Burger works inside Microsoft Research--the group where the tech giant explores blue-sky ideas--and in November 2012, he pitched a radical new concept to Qi Lu, the man who oversees Microsoft's Bing web search engine. He wanted to completely change the machines that make Bing run, arming them with a new kind of computer processor.
Doug Burger.
Microsoft
Like Google and every other web giant, Microsoft runs its web services atop thousands of computer servers packed into warehouse-sized data centers, and most of these machines are equipped with ordinary processors from Intel, the world's largest chip maker. But when he sat down with Lu, Burger said he wanted millions of dollars to build rack after rack of computer servers that used what are called field-programmable gate arrays, or FPGAs, processors that Microsoft could modify specifically for use with its own software. He said that these chips--built by a company called Altera--could not only speed up Bing searches, but also change the way Microsoft runs all sorts of other online services.
Despite the cost, and the riskiness of the proposition, Lu liked the idea. In a first for Microsoft, he approved a 1,600-server pilot system to test out Burger's ideas, and now, he has given the green light to actually move these FPGAs into Microsoft's live data centers. This is set to happen early next year. That means that a few months from now, when you do a Bing search, there's a decent chance that it will be carried out by one of Burger's servers.
The move is part of a larger effort to fix what is an increasingly worrisome problem for big web companies like Microsoft, Google, and Facebook. After decades of regular performance boosts, chips are no longer improving at the same rate they once were. As their web services continue to grow, these companies are looking for new ways of improving the speed and efficiency of their already massive operations. Facebook is exploring the use of low-power ARM processors. According to reports, Google is too. And now Microsoft is about to roll out FPGAs. "There are large challenges in scaling the performance of software now," says Burger. "The question is: 'What's next?' We took a bet on programmable hardware."
FPGAs, like the Altera chips that Microsoft used in its pilot project, have been around for years. A decade ago, they were widely used by chip designers as a low-cost way to prototype their new products. But lately, they've crept into networking gear, complex computer rigs that run the bitcoin digital currency, and even some specialized systems used by Wall Street firms to do data analysis. They give hardware makers more freedom to customize their gear.
Using FPGAs, Microsoft engineers are building a kind of super-search machine network they call Catapult. It comprises 1,632 servers, each one with an Intel Xeon processor and a daughter card that contains the Altera FPGA chip, linked to the Catapult network. The system takes search queries coming from Bing and offloads a lot of the work to the FPGAs, which are custom-programmed for the heavy computational work needed to figure out which webpage results should be displayed in which order. Because Microsoft's search algorithms require such a mammoth amount of processing, Catapult can bundle the FPGAs into mini-networks of eight chips.
Microsoft
The FPGAs are 40 times faster than a CPU at processing Bing's custom algorithms, Burger says. That doesn't mean Bing will be 40 times faster--some of the work is still done by those Xeon CPUs--but Microsoft believes the overall system will be twice as fast as Bing's existing system. Ultimately, this means Microsoft can operate a much greener data center. "Right off the bat we can chop the number of servers that we use in half," Burger says.
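The gap between a 40x chip and a 2x system is classic Amdahl's law: the share of the pipeline that stays on the CPU caps the overall gain. A quick illustrative calculation (my numbers, not Microsoft's) shows what those two figures together imply about how much of the work must move to the FPGAs:

```python
# Amdahl's law: overall speedup when a fraction p of the work is
# accelerated by a factor s and the rest runs at its old speed.
def overall_speedup(p: float, s: float) -> float:
    return 1 / ((1 - p) + p / s)

# For a 2x overall speedup with a 40x chip, solve 1/((1 - p) + p/40) = 2:
p = (1 - 1 / 2) / (1 - 1 / 40)
print(f"offload ~{p:.0%} of the work -> {overall_speedup(p, 40):.1f}x overall")
# offload ~51% of the work -> 2.0x overall
```

In other words, if Burger's numbers hold, roughly half the query-processing work would have to run on the FPGAs to double the system's speed.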
What's more, Microsoft can update the chips in much the same way it updates Bing's system software, and Burger and his team can modify the logic on their processors to address bugs and changes in the Bing search algorithm. They do this by building a binary file that represents the updated chip logic and distributing it through Microsoft's standard server management software, called Autopilot. It's not uncommon to have several chip updates per week, Burger says.
Of course, there have been challenges. There was a lab flood and a fire with one of their Taiwanese parts suppliers, and as it stands, Microsoft's server monitoring tools didn't always know what to make of chips that are suddenly dropping offline and restarting with reconfigured logic. But Microsoft is confident that the new FPGAs can be used across the company's online empire. "If all we were doing was improving Bing, I probably wouldn't get clearance from my boss to spend this kind of money on a project like this," says Peter Lee, the head of Microsoft Research. "The Catapult architecture is really much more general-purpose, and the kinds of workloads that Doug is envisioning that can be dramatically accelerated by this are much more wide-ranging."
It's also the kind of work that's likely to be emulated at other big web companies who have the resources to hire hardware developers, says James Larus, dean of the School of Computer and Communications Sciences at the École Polytechnique Fédérale de Lausanne. He previously worked at Microsoft on Project Catapult. "The benefits of hardware specialization are far too large for the right application for these companies to pass up the opportunity," he says.
According to Burger, developing a whole new chip architecture for one of the world's largest data center operators is the kind of thing that Microsoft Research does pretty well. "Let's jump way out, think of something a little crazy, and then push on it and see how well that works," he says. Come 2015, you can get the answer to that question simply by searching Bing.
" |
727 | 2,014 | "Microsoft's Most Clever Critic Is Now Building Its New Empire | WIRED" | "https://www.wired.com/2014/05/mark-russinovich" | "
Cade Metz
Microsoft's Most Clever Critic Is Now Building Its New Empire
Mark Russinovich, inside Microsoft's Redmond, Washington headquarters.
Photo: Mike Kane/WIRED
Before joining Microsoft and becoming one of its most important software engineers, Mark Russinovich was in the business of pissing the company off.
This was the late 1990s, when Microsoft dominated the tech world, its Windows operating systems running so many of the world’s computers, from desktops and laptops to corporate workstations and servers. During the day, Russinovich built software for a tiny New Hampshire software company, but he spent his evenings and weekends looking for bugs, flaws, and secrets buried inside Microsoft’s newest and most important operating system, Windows NT. Sharing his findings with the press or posting them to the web, he frequently pissed off Microsoft, but never so completely as the time he exposed Windows NT as a fraud.
Windows NT represented Microsoft’s future–its core code would underpin the company’s operating systems for years to come–and at the time, it was sold in two flavors. One was for corporate workstations used by engineers, graphic designers, and the like, and the other was for servers. NT Workstation was much cheaper, but, unlike NT Server, it barred you from running web serving software, the software that delivers websites to people across the internet. Microsoft said that NT Workstation just wasn’t suited to the task. But then Russinovich reverse-engineered the two OSes and showed that the truth was something very different. NT Workstation, he revealed, was practically identical to NT Server. It wasn’t that the OS couldn’t run web serving software. Microsoft just didn’t want it to.
The story shows that Microsoft is capable of change–however long that change might take.
The ruse was typical of the software giant, a way of artificially shifting a market in its own favor. It could force all web serving onto a more expensive OS while still selling a cheaper version for other tasks. And after Russinovich exposed the practice, releasing a tool that let anyone transform NT Workstation into NT Server , the company responded in typical fashion. Days later, when employees from his New Hampshire company flew across the country to participate in a Microsoft event, Microsoft barred them from the building. But at the same time, the incident managed to bring Russinovich closer to the software giant. Even as his colleagues were shut out of the company, the head of Windows offered him a job.
Told by the six-foot, five-and-a-half-inch Russinovich in his wonderfully straightforward way, it’s a tale that lays bare the unapologetically ruthless attitude that pervaded Microsoft in the ’90s and on into the aughts, an attitude that brought it enormous success but also landed the company in hot water with regulators and ultimately hampered its ability to compete in the more open and collaborative world of the modern internet. But the postscript to the tale–where Jim Allchin, the head of Windows, tries to hire Russinovich–also shows that Microsoft is more complicated than you might expect, that the company is capable of change, however long that change might take.
When Allchin offered him the job, Russinovich didn’t take it. But after several more years spent running his Sysinternals site–where he published a steady stream of exposés that, in his words, “pissed off” Microsoft and other tech outfits–he did join the software giant. The company made him a Microsoft Technical Fellow–one of the highest honors it can bestow–and today, he’s one of the principal architects of Microsoft Azure, the cloud computing service that’s leading the company’s push into the modern world.
Russinovich is a symbol for a new Microsoft, a Microsoft that’s systematically changing its old ways.
Mirroring the company’s technical evolution, he began his career in computer operating systems and has now moved into the cloud. But, more than that, he embodies a new Microsoft attitude. Russinovich has a long history with Microsoft–so he understands the old attitudes and how some of them can still help the company–but, like recently appointed CEO Satya Nadella, he also sees where the company has gone wrong and where it must now travel in order to compete in a world shaped by the Googles, the Facebooks, and the Amazons.
‘I feel that, more and more, Microsoft is embodying the values I’ve always had.’
Today’s Microsoft, he says, is closer to what he wants it to be. “I feel that, more and more, Microsoft is embodying the values I’ve always had,” Russinovich told us last month at Microsoft’s annual Build conference in San Francisco, a conference where the company open sourced its most important software development tools–freely sharing them with the world at large–the sort of thing it never would have done in years past.
Even in small ways, Russinovich belies the Microsoft stereotype. As those inside the company will tell you, he’s unafraid to speak his own mind–something you see not only when he tells the story of his Windows NT exposé, but when he looks back on the NSA spying scandal and its effect on Microsoft.
“He’s an independent thinker,” says Rich Neves, who has worked with Russinovich both inside IBM’s research operation and at Microsoft. “He has what you call intellectual honesty.” And as science fiction fans will tell you, he’s more than just a corporate software engineer. He’s the author of three techno-thrillers–Zero Day, Trojan Horse, and Rogue Code–Michael Crichton-esque novels recently optioned by an independent film producer. But he’s also someone who’s actively pushing Microsoft into new places, most notably with Azure.
Azure didn’t begin with Russinovich. But, along with Nadella, he’s one of the primary thinkers who pulled the cloud service out of the old Microsoft mindset and turned it into something that can compete for the future. “He has real vision,” says HP cloud chief Bill Hilf, who once worked alongside Russinovich at Microsoft. “And he knows how to listen to customers.”
Russinovich in the basement of his home, next to a cardboard cutout that promoted his cyber-security work.
Photo: Mike Kane/WIRED
Cloud computing was invented by Amazon. In the mid-aughts, the web giant unleashed services that let anyone rent computing power over the internet, without setting up their own computer servers, and this sparked a revolution in the way companies built and ran their websites and other software applications. Netflix built its TV and movie business atop the Amazon cloud. Dropbox erected its file-sharing operation there.
‘I ranted at some of the architects when I was at Microsoft. They were constraining the sorts of things you could do.’
The Amazon cloud was a threat not only to server makers like HP and Dell, but also to Microsoft, which had traditionally made so much money selling operating system software to these server makers. So, in the wake of Amazon’s success, Microsoft built its own cloud service. Led by people like Dave Cutler, the man who oversaw the creation of Windows NT, the company built Azure.
The trouble was that, unlike the Amazon cloud, which let you build software however you liked, Azure forced you to build it in a particular way, and this revolved around Microsoft’s own software development tools. For Chris Brown, who helped build the Amazon cloud and later worked at Microsoft, it was a product of the company’s outmoded way of thinking.
“I ranted at some of the architects when I was at Microsoft. They were constraining the sorts of things you could do,” Brown told us in 2012. “Microsoft likes to do a really big up-front design, where they define the physics of a new universe. They birth this new universe, and they say: ‘This is how you do it’–instead of starting out with something simple and letting people show them how it should be done.”
But as Azure struggled to find an audience–and other cloud companies continued to threaten its place in the world–Microsoft slowly let go of this mentality. Under Nadella, the Azure team expanded the system, creating a new service that could run almost anything–including Linux, the massively popular open source operating system that Microsoft once fought so hard to squeeze out of the market. One of the primary architects of this new service was Mark Russinovich.
If you passed him on the street, you wouldn’t peg him as a computer engineer–he looks more like Jon Hamm than Dennis Ritchie–but he has never been anything else. Russinovich completed a computer engineering master’s degree from the Rensselaer Polytechnic Institute in New York and a PhD at Carnegie Mellon, before doing a post-doc at the University of Oregon. He specialized in the design of computer operating systems. Typically, he studied the code at the heart of UNIX, the seminal OS that still underpins Linux and Google’s Android and Apple’s Mac OS, but at Oregon, he moved into Windows.
From the perspective of the tightly-controlled corporate giant that Microsoft had become, Russinovich was still a loose cannon.
Together with another grad student named Bryce Cogswell, he used a federal research grant to explore ways of dealing with crashes and other failures in Microsoft Windows 3.1, the prevailing desktop operating system of the day. After leaving the university, the two moved to separate cities–Cogswell launching a startup in Austin, Texas and Russinovich joining that tiny software company in New Hampshire–but their graduate work soon spawned a sideline business they called Winternals.
Basically, they built new tools for using and hacking Windows NT and other core Microsoft software. Then, as a way of calling attention to these tools–and, in the broader scheme of things, keeping Microsoft honest–Russinovich would reverse-engineer the OS, pinpointing flaws and sharing them with the world through his Sysinternals site and trade pubs like PC Week.
In some cases, he even went straight to Microsoft. One of the highlights of his career, he says, was the time he found a bug in the way the OS juggled multiple computing tasks, or threads, and he emailed it to Dave Cutler, the father of NT. Cutler responded with one word: “Thanks.”
Photo: Mike Kane/WIRED From the perspective of the tightly-controlled corporate giant that Microsoft had become, Russinovich was still a loose cannon, someone who went after not only Microsoft’s practices but those of countless other tech companies. In 1995, he revealed that a tool called SoftRAM, which promised to expand your computer’s memory, didn’t really do so , and the FTC forced a recall. A decade later, he discovered that Sony was installing what amounted to spyware on people’s PCs , and following another government investigation–and several lawsuits–Sony ended up paying out too.
Nonetheless, in the wake of the Sony scandal, Microsoft offered Russinovich another job. This time he took it, but only after the software giant agreed to buy Sysinternals and keep it going. To negotiate the deal, Russinovich hired Microsoft’s former head of mergers and acquisitions, who had only just left the company. “He knew the playbook,” Russinovich says, “which made things easier.” So, on the side, Russinovich continued to do what he had always done, but now his main task was to hone the code at the heart of Windows, helping to shape new OSes such as Windows Vista and Windows 7. He did this for the next four years, and then, encouraged by Cutler and Microsoft chief technology officer Ray Ozzie, he moved to Azure.
An operating system runs on a single machine and a cloud service runs across thousands, but the two behave in much the same way. They allow a collection of interconnected hardware to work as a whole. That’s one reason that Russinovich, someone who worked for so long in the guts of the operating system, is so well suited to building a sweeping cloud service like Azure. “Many of the same architectural principals apply,” he says.
The bigger difference is that Microsoft isn’t in a position to tightly control the way people use a cloud service. The competitive landscape has changed. Many businesses are still reluctant to move into the cloud—for security, regulatory, and other reasons–and if they do move, it’s too easy for them to choose another service: an Amazon or a Google or a Rackspace. But that’s another reason Russinovich is suited to the job.
Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg ‘Look at what you’re doing through the eyes of the customer, treat the customer with respect, and assume the customer is smart.’ The company’s decision to refashion Azure as a service where businesses could run practically any software, including Linux, says Russinovich, was a direct response to discussions with longtime Microsoft customers. They wanted a way to move their existing software into the cloud, rather than just building new applications to suit Azure’s very specific architecture. “We needed to give them an on-ramp,” he says, and that’s what he helped design. It’s this kind of simple customer interaction, Russinovich explains, that shows how Microsoft is now aligning with his personal values. “It’s really just following some basics that can get lost in the heat of the drive to grab revenue and maximize profits: look at what you’re doing through the eyes of the customer, treat the customer with respect, and assume the customer is smart,” he says.
Judging from the recent growth in Microsoft’s Azure business, the move has paid off, despite increased competition from Google and Amazon. A Microsoft that runs Linux is a better Microsoft. But for Russinovich, this is merely a first step. The irony is that he believes the world will eventually embrace something that’s a lot more like Azure’s original architecture.
Known as a “platform-as-a-service”–as opposed to an Amazon-like “infrastructure-as-a-service”–the original architecture tightly controlled how software was built, but it also ensured that businesses didn’t have to deal with many of the hassles that typically come with running large software applications, like spreading the software over more machines to accommodate more traffic. The platform-as-a-service handles that for you. This, Russinovich says, should be the ultimate goal.
So he’s now working to merge the platform service and the infrastructure service, giving people the power to run any software while still ensuring this software operates in an automatic way. “We want to blend the two worlds,” he says. Google is moving down a similar road, and, in a way, Amazon is too. Nowadays, Microsoft must compete head-on with rivals, and that’s exactly what it’s doing.
Russinovich walks to a meeting inside Microsoft HQ.
Photo: Mike Kane/WIRED The added wrinkle is that Microsoft is battling more than just the Googles and the Amazons. Like these rivals, it’s battling widespread concerns that the cloud is less secure than systems that run inside your own data center–concerns that only heightened when ex-government contractor Edward Snowden revealed that the NSA was spying wholesale on the internet’s largest services. But this is another area where Russinovich lends a hand.
He is a novelist who has lived what he writes about.
Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg In running Sysinternals, Russinovich became one of the world’s preeminent security researchers. As he reverse engineered Windows and other software, one of his primary aims was to locate bugs and other vulnerabilities that could expose the system to miscreants–the Sony rootkit being a prime example. It was this work that inspired his first novel, Zero Day , which takes the idea of a cyber attack to its logical extreme, describing a world where Arab terrorists let loose a virus on everything from airplanes and ships to hospitals and nuclear power plants. He was feeding off a lifelong love of techno-thrillers–something that began when he picked up Michael Crichton’s The Andromeda Strain as a kid–but unlike many science fiction writers, he has lived much of what he writes about.
This is why, when the NSA story broke, Russinovich was part of the small team that worked to remake Microsoft’s online security. Much like Google, the company started encrypting all information moving between its data centers and laid down a new set of cryptography schemes. Many still question how effective these schemes will be, accusing the company of secreting collaborating with the NSA and other government organizations to share data through other channels, but Russinovich is quick to say this can’t happen in today’s world. “The risk to the business is monumental,” he says. “Without trust, there is no cloud. You’re asking customers to give you their data to manage, and if they don’t trust you, there’s no way they’re going to give it to you. You can screw up trust really easily. You can screw it up just by showing incompetence. But if you show intentional undermining of trust, your business is done.” It’s the type of thing Microsoft might have said in the past. But this time, the words ring at least a little differently. They’re coming from Mark Russinovich, and he isn’t what Microsoft used to be.
1 Correction 13:40 EST 05/27/14: This story originally indicated that Dave Solomon worked for Microsoft. He was self-employed.
" |
728 | 2,010 | "Satya Nadella’s Got a Plan to Make You Care About Microsoft. The First Step? Holograms | WIRED" | "https://www.wired.com/2015/01/microsoft-nadella" | "Satya Nadella’s Got a Plan to Make You Care About Microsoft. The First Step? Holograms
By Jessi Hempel
On a campus notable for tight security and secret offices, Building 92 is a rare beacon of openness. Guests can enter without a Microsoft ID and browse corporate history in the visitor center or pop into the company store for branded water bottles, onesies, and “my mom is a geek” T-shirts. And yet, directly beneath them, tucked away in the basement, there is a lab so confidential that even most employees have never heard of it. Alex Kipman flashes his badge across the access pads to a set of double doors and goes bounding down the stairs.
Over the past five years, Kipman and a team of Microsoft engineers, designers, and researchers have toiled in this windowless space to create a top-secret product that might be the company’s most ambitious since the 2010 release of the motion-sensing gaming device Kinect: an augmented reality headset codenamed Project HoloLens. The device—a kind of face-computer that looks like a pair of space-age sunglasses—is a bit like the Oculus Rift virtual reality headset. But while the Rift immerses its wearers in a completely digital environment, Project HoloLens weaves digital elements into the real world—a magical merging of the virtual and physical.
Over the next couple of hours, I play a game where a character jumps around a real room, collecting coins sprinkled atop a sofa and bouncing off springs placed on the floor. I sculpt a virtual toy (a fluorescent-green snowman) that I can then produce with a 3-D printer. I collaborate with a motorcycle designer Skyping in from Spain to paint a three-dimensional fender atop a physical prototype. I traverse the surface of Mars with a NASA scientist.
Satya Nadella, December 2014.
Platon But it’s a much more mundane task that really gives me a sense of Project HoloLens’ potential: fixing a light switch. Kipman places the headset on me, and points me toward a 3-inch-wide hole in the wall with wires jutting out of it and a nearby sideboard topped with unfamiliar tools. (As is perhaps obvious, I’m no electrician.) An engineer pops up on my screen, Skyping in from another room, and introduces himself. He can see exactly what I’m seeing. He draws a holographic circle around a voltage tester atop the sideboard. Then he walks me through the process of installing the switch, coaching me and sketching quick holographic arrows and diagrams that glow on the wall in front of me. Five minutes later, I flip a switch and the living room light turns on.
Project HoloLens is extremely ambitious, and it’s the first major test of whether Microsoft’s new CEO, Satya Nadella, can restore the company’s long-dormant reputation for innovation and creativity. Nadella, 48, brings a fresh leadership style to the job, pairing the institutional knowledge he acquired over more than two decades at Microsoft with a collaborative, nice-guy approach to management. “He has allowed ideas to bloom and be considered,” says Terry Myerson, the executive in charge of Windows, who has been with the company since 1997. “That’s hard to do with big groups of people.” Project HoloLens chief inventor Alex Kipman.
Platon When Microsoft was founded, its ambitious mission to power a personal computer on every desk in every home seemed as radical as Project HoloLens does today. But 40 years later, the going perception in Silicon Valley is that the company’s best days are behind it. In a public conversation with Marc Andreessen in October, investor Peter Thiel called Microsoft a bet “against technological innovation.” Though Microsoft makes a lot of money—sales revenue jumped almost 12 percent to $86 billion last year—its core business is declining, a dynamic that was set in motion more than a decade ago, when nearly every enterprise owned and ran Windows-powered PCs and servers.
Microsoft’s fall stems from its attempts to lock users into its products by refusing to work with competitors. Lost in the hubris that can come with market dominance, the company launched a series of me-too hardware products, figuring loyalists would embrace them. There was the Zune MP3 player that followed the iPod, the Surface tablet that replicated the iPad, and the Kin, a much-hyped 2010 phone designed for social networking that was on sale for just 48 days before Microsoft and Verizon killed it. Consumers turned to better-designed devices that were plugged into other software ecosystems where Microsoft had no stake, rendering the company irrelevant. Meanwhile, the computing industry was changing. Processing increasingly happened in the cloud, and businesses rented the software they used. Users began to shift more of their work to mobile devices, most of them powered by Apple’s iOS and Google’s Android. In 2014, Microsoft accounted for an estimated 4 percent of the global market share for smartphone operating systems. In the “mobile first, cloud first” world that Nadella is fond of referencing, Microsoft missed mobile and came late to the cloud.
This was the state of affairs that Nadella faced when he took the top job a year ago. In an early analyst call, he quoted philosopher Friedrich Nietzsche, saying Microsoft must have “courage in the face of reality.” Since then, he has been doing just that. His predecessor, Steve Ballmer, described Microsoft as a devices and services company. Nadella has scrapped that, casting it instead as a company capable of working across any platform—even those controlled by competitors—to help people be more productive. He has made Office software available on Apple- and Google-powered tablets and phones and made Windows free to manufacturers of devices smaller than 9 inches. He has forged new partnerships with companies Microsoft once considered enemies and spent time with startups to learn how innovative business models work. And he paid $2.5 billion for Minecraft maker Mojang, so that a new generation will grow up on Microsoft’s software.
But Project HoloLens is by far the boldest—and riskiest—move of the Nadella era. It’s not another me-too product but a truly unique experience. It’s also the kind of project that few besides Microsoft would undertake—a lavish, multiyear effort that builds on lots of in-house research, all in service of extending the reach of Windows. Nadella believes that Project HoloLens is nothing less than the emergence of the next computing interface, saying, “It’s like the first time you used Excel on a PC with a mouse and a keyboard” (a transformative experience that perhaps only a longtime Microsoft exec can cite). More significant for Nadella, even though Project HoloLens has been in the works since long before he ascended to CEO, it will define the first years of his tenure—heralding either a new era of innovation at Microsoft or another regrettable chapter in the story of a company in decline.
Satya Nadella is drawing me a picture of Microsoft. We're in his office, the same simple square of Building 34 previously occupied by Ballmer, and before him Bill Gates, and the furniture doesn't seem to have changed much—the same Ikea shelves line the walls. Still, there are some indications that signal the arrival of a new occupant. For one thing, there's an empty iPad mini box lying open on his desk. A cricket bat rests right next to it.
WIPEOUTS Microsoft has a long history of flops. Here’s a sampling.
—Jason Kehe
1996 Windows CE
2004 SPOT smartwatch
2006 Zune MP3 player
2007 Windows Vista
2007 Mediaroom entertainment system
2010 Kin phone
2010 Docs.com
Nadella squats in front of a dinged-up black laminate coffee table and sketches three concentric circles on a piece of scratch paper. A wiry man with a shaved head and black-framed eyeglasses, he has a voice with a range of octaves but only one moderate volume. The outer ring, he explains, is Concepts—the vision that allows the company to think up new things like Project HoloLens. Inside this, he labels the second circle Capabilities—the engineering and design skills necessary to make things. Nadella pauses on the smallest circle, the center of the bull's-eye, which he labels Culture. “You need a culture that is fundamentally not opposed to new concepts and new capabilities,” he says.
Executive vice president Qi Lu.
Andrew Hetherington Microsoft has had no problem with the outer circles. It has combined vision with breathtaking engineering to create a whole bunch of amazing prototypes. But they rarely make it to market. That’s because, over the past two decades, its culture has grown competitive and insular, more consumed with getting and protecting an edge than pushing into riskier new businesses. People were motivated to produce things they knew their managers would like, rather than take risks on new ideas that might fail. The company’s money-minting core offerings, Windows and Office, sucked up talent and attention while newer ideas got overlooked. Under former CEO Ballmer, employees were expected to live the Microsoft lifestyle, using Windows-powered phones and Surface tablets even when the bulk of the innovation was happening on iOS and Android devices. Says one veteran, “I think there’s a lot of people that really felt, you know, maybe like Detroit does. You drive American.” It has become accepted wisdom in Silicon Valley that large, successful tech companies can’t reinvent themselves. Many have attempted to engineer comebacks, and the industry is teeming with failed empires that have evolved into middling businesses on the decline: BlackBerry. Hewlett-Packard. Yahoo. But Nadella finds inspiration in an example closer to home: Microsoft itself. He remembers sitting in a waiting room at Goldman Sachs in 1992, trying to meet an underling of the CIO. He never did get in, because Goldman Sachs, like everyone else at the time, thought of Microsoft as a company that only sold software for your home PC. “They wouldn’t even bother to see Microsoft people, saying, ‘What the heck do PC people have to do with us?’” Nadella recalls. He lets a pause settle between us, prompting me to reflect on the rise of enterprise computing, which enabled Microsoft to embed its Office software in practically every business in the world. “And so things change,” he concludes.
Redmond Missed Mobile The old Microsoft never gained turf in smartphone operating systems. As users migrated to mobile globally, the company was quickly overtaken by Google and Apple. —J.K.
While Nadella developed much of his management approach on the job, his clarity of vision and empathetic listening style trace their roots to a formative personal event. He’d arrived at Microsoft in 1992 with a master’s degree in computer science from the University of Wisconsin-Milwaukee (he’d later earn an MBA from the University of Chicago). He was newly married to a woman he’d met in high school back in Hyderabad, India, where they grew up. For the first few years at Microsoft, Nadella was on the fast track, progressing quickly through the ranks. Then Zain, the first of his three children, was born profoundly disabled. The reality that Zain would be confined to a wheelchair set in. Initially Nadella asked himself, “Why us? Why did this happen to us?” But after a couple of years, his perspective shifted. “We realized this has nothing to do about us and everything to do about him,” he says. His son’s condition helped him see beyond himself and compelled him to force-rank his daily priorities so that he could meet his son’s needs and still perform his job—the same skill so necessary to effective management. The experience held other lessons too. “I think back to how I thought about work before and after, and this notion of the words you say and what they can do to the other person,” he says, referring to his interactions with his wife and son. “How can you really change the energy around you? It’s a thing that started building in me, and I started exercising it in my day job. It made a lot of difference to how I felt when I went back home. So much of it is mental attitude.” He keeps a black-and-white headshot of Zain beneath his monitor, his son’s head thrown back, laughing.
Long before he was CEO, this approach helped Nadella start to build a new culture amid seemingly immutable circumstances without making enemies. For example, when he wanted to start a cloud-computing business—which would mean borrowing technology from the search engine Bing—he ran up against Microsoft’s powerful SQL server business. Normally the SQL team would have instantly squashed Nadella’s initiative. But Nadella persisted, eventually convincing the server group that the rise of cloud computing was inevitable. “He won that battle,” says James Staten, an analyst with Forrester who has covered Microsoft for more than a decade. “It was a huge political shift.” NASA JPL scientist Jeff Norris.
Andrew Hetherington As CEO, Nadella has restructured Microsoft to function more like the Silicon Valley companies that have eclipsed it. To be fair, it’s a process Ballmer put in motion in the summer of 2013 when he reorganized the company into cross-functional teams, abolishing the powerful product divisions for a flatter, more integrated approach he called One Microsoft. Nadella has streamlined these teams, cutting 14 percent of the staff. He has eschewed Microsoft’s traditional R&D cycle, in which products went through a testing phase after development, in favor of a fast-moving process in which these steps happen in tandem. To foster experimentation, the company opened Garage—a 32-chapter group of in-house tinkerers—to the public so outsiders could test Microsoft’s ideas.
Nadella’s new philosophy extends to the org chart, where he’s empowering his executives to work across once-siloed divisions. He has named Julie Larson-Green, formerly the executive vice president in charge of devices like Xbox and Surface, to a new role: chief experience officer. Judging narrowly by the org chart, it was a demotion, since she now reports to Qi Lu, one of Nadella’s deputies and head of the Applications and Services Group, instead of directly to Nadella. But in many ways, it’s a bold experiment that puts Larson-Green at the forefront of Nadella’s new approach to development. Larson-Green now determines how Microsoft’s products, from Xbox to Office, can better support one another and also perhaps work with other companies’ popular apps and services. This shift, which sets an important precedent for other Microsoft employees, seems to have worked because Nadella, Lu, and Larson-Green share common goals and, they say, a trust that runs deep enough to allow for a flatter hierarchy. (Indeed, earlier in his career, Nadella actually reported to Lu.) To motivate people, Nadella asked Bill Gates to spend 30 percent of his time as technical adviser to the company. Nadella sees the moral authority of the founder as a critical management tool. “When I say, ‘Hey, I want you to go run this by Bill,’ I know they’re going to do their best job prepping for it,” he says. Gates is not a regular at management meetings, however. He interacts primarily with senior staff, including Qi Lu, offering feedback on Microsoft’s technical work.
Nadella also revised Microsoft’s approach to research and development. The company has long spent upwards of 11 percent of revenue on this area, and it has had a reputation for investing in the type of blue-sky undertakings that may not see a commercial outlet. Take Microsoft Courier, a 2008 booklet PC with touchscreens that faced each other; the ill-fated device never made it out of the lab. (Its team left afterward and later founded FiftyThree, the design startup behind the iPad app Paper and digital stylus Pencil.) Nadella has pushed researchers to collaborate much more closely with engineers in other departments to help them get products out faster. The release of Microsoft’s Skype Translator, which translates multilingual conversations in real time, is an early success. Nadella calls Skype Translator “a moment of truth” because it required groups of people to work across divisions, combining features from the Skype folks, the Azure cloud-computing team, and the Office teams. That’s the kind of cooperation that never happened in the old Microsoft.
Inside the Headset 1. CAMERA The Project HoloLens depth camera has a field of vision that spans 120 by 120 degrees, far more than the original Kinect, while drawing only a fraction of the power.
2. COMPUTER As many as 18 sensors flood the brain of the device with terabytes of data every second. It handles the onslaught with an onboard CPU, GPU, and first-of-its-kind HPU (holographic processing unit).
3. LENSES To trick your brain into perceiving holographic images at certain make-believe distances, light particles bounce around millions of times in the so-called light engine. Then the photons enter the two lenses (one for each eye), where they ricochet some more between layers of blue, green, and red glass before finally hitting the back of your eye.
4. VENT The device is more powerful than a laptop but won’t overheat—warm air flows to the sides, where it vents up and out.
Interface GESTURES Engineers are fine-tuning a feature called “holding” that would allow you to grasp and manipulate holographic objects. Opening your hand would take you back to a home screen.
VOICE Microphones in the device capture voice commands.
GAZE Sensors track where the wearer is looking and adjust the display.
Uses HOLOGRAMS The device can project a hologram into a room and keep it locked in position—an essential feature its engineers call “pinning.” Instead of the object moving relative to you, you can move around the object and view it from any angle. In the case of this holographic raptor, that means it’s easy to stay just beyond its scary reach.
VIRTUAL ENVIRONMENTS Project HoloLens can simulate a physical space like the surface of Mars, complete with the Curiosity rover. Once inside the environment, scientists can interact with objects and overlay the space with virtual flags. For example: Placing a flag in the distance could, theoretically, tell the real rover to go there and collect a soil sample.
AUGMENTED REALITY The device scans your environment and builds a digital model in real time. Then, if you’re playing a game, a character from the game can frolic as a hologram around your living room. Project HoloLens not only knows the couch is there, it also sees that it’s made of leather—and is much cushier than, say, your wood floor.
Project HoloLens' chief inventor, Alex Kipman, is representative of the Microsoft that Nadella is trying to build. While his official title is technical fellow with the Operating Systems Group, he works collaboratively across disciplines. Nadella appreciates his versatility. "Alex is pretty crazy in the sense that he's not like your classic engineering guy," he says, drawing a distinction between the predictability of typical engineers and the imaginative quality of their researcher counterparts. "He sort of thinks of engineering as a research project." Kipman, who was born in Brazil, started young. His parents had to replace his Atari 2600 twice because he kept breaking it to figure out how it worked. He landed at Microsoft after graduating from Rochester Institute of Technology, and by the end of 2007 he'd dreamed up Kinect, the motion-sensing accessory for the Xbox. "When I pitched Kinect to the company, it wasn't Kinect. It was this vision," he told me, holding up an early prototype for Project HoloLens. "Kinect was the first step." Kipman believes Project HoloLens will be to this phase of computing what the PC was to the last: the latchkey to a completely transformed world. In this new reality, sensors will be everywhere, producing copious amounts of data, a layer of ambient intelligence coating every physical object. Project HoloLens and its counterparts will offer a visual computing platform controlled by speech and gesture that is so intuitive it fades into the background. "So you and I can do what we're put on earth to do: interact with other humans, environments, or objects," Kipman says. "With technology helping us do that more, better, faster, and cheaper." Project HoloLens is built, fittingly enough, around a set of holographic lenses. Each lens has three layers of glass—in blue, green, and red—full of microthin corrugated grooves that diffract light. There are multiple cameras at the front and sides of the device that do everything from head tracking to video capture. And it can see far and wide: The field of view spans 120 degrees by 120 degrees, significantly bigger than that of the Kinect camera. A "light engine" above the lenses projects light into the glasses, where it hits the grating and then volleys between the layers of glass millions of times. That process, along with input from the device's myriad sensors, tricks the eye into perceiving the image as existing in the world beyond the lenses.
The device has just three controls: one to adjust volume, another to adjust the contrast of the hologram, and a power switch. Its speakers rest just above your ears. Project HoloLens can determine the direction from which a sound originates, so that when you hear something, it'll appear to be coming from where it would be in real life. If a truck is meant to be speeding by your left side, for example, that's where you'll hear the sound of its engine. By the time Project HoloLens comes to market toward the end of this year, it'll weigh about 400 grams, or about the same as a high-end bike helmet. Microsoft's new operating system, Windows 10, powers it, so any developer can program for it.
NASA has already gotten an early crack at it. As the mission operations innovation lead at the agency's Jet Propulsion Laboratory, Jeff Norris is charged with rethinking how we explore space, with a focus on the interface between humans and technology. He met Kipman nearly five years ago when he was creating Kinect. In Project HoloLens, Norris saw the potential for technology to help space explorers collaborate more closely and to provide them a quality known as presence. (“People make better decisions when they feel like they're in the environment,” he says.) Last March, Norris and several members of his team relocated from Southern California to Redmond for a few months to build a Mars simulation.
Kipman lets me test-drive it. I slip on the headset and find myself on the parched, dusty surface of the Red Planet. Behind me, the Curiosity rover towers 7 feet tall, its cameras recording the terrain. The illusion is so real my legs begin to quiver, unsure what to make of the disparate information I'm sensing. Norris appears beside me in the Mars-scape, represented as a 3-D golden human-shaped blob. A dotted line extends from his eyes toward what he's looking at. “Check that out,” he says, and I squat down to see a rock shard up close. Project HoloLens allows me to work on a desktop computer while in the demo, something you can't do in the Rift's virtual world. It also makes it possible for me to pin holographic flags on the virtual scenario, and someday this will be able to set in motion real-world actions. With an upward right-hand gesture, I bring up a series of controls. I choose the middle of three options, which drops a flag. When scientists do this, the command could theoretically be transmitted to the actual rover so that the task can be accomplished in real life, on Mars.
The simulation is so effective that NASA plans to deploy it on a mission by this summer. But this is just one example of Project HoloLens' capabilities. The real opportunity for the platform will come from developers committing resources and imagination to it. NASA has already signed on as a launch partner; others will likely follow. But for Project HoloLens to succeed—and for Microsoft to succeed—it has to build platforms that developers want to build software for, as it did with Windows for PCs in the '90s, and as it failed to do with Windows for phones earlier in this decade.
Virtual Reality, Real Money Venture firms have bet more than $1 billion that the next big computing platform will emerge from virtual- and augmented-reality projects. —J.K.
Two and a half months before Microsoft announced Project HoloLens, I went to London to watch Nadella address European customers and developers at an event called Future Decoded. This is the type of audience he'll have to win over if Project HoloLens and future innovations are to succeed. A swing band performed outside an exhibit hall full of Microsoft demos. More than a thousand people crowded into the nearby auditorium for Nadella's appearance, which was advertised as an “intimate, interactive conversation” with a Microsoft UK executive whose title was “chief envisioning officer.” The duo promised to cover “how Microsoft is creating the next generation of technology innovation.” For all of its promise, the conversation underwhelmed. Nadella kept it to just 15 minutes. His interview was long on catchphrases (“reinventing productivity” and “mobile first, cloud first”) and short on the how. As he exited the stage, a local reporter remarked on his brevity in a tweet: “Whoa! He's gone!” Of course, the point wasn't to say anything new; it was to show face, reinforce Microsoft's message, and rebrand the company and its culture as approachable and forward-thinking.
MEET THE NEW GUY Highlights from Nadella’s busy first year as CEO. —J.K.
February 4 Becomes CEO. Asks founder Bill Gates to spend 30 percent of his time as technology adviser.
February 24 Names Julie Larson-Green chief experience officer.
March 27 Introduces Office for the iPad.
April 2 Makes Windows free to manufacturers of devices smaller than 9 inches.
May 29 Partners with Salesforce, a long-time competitor.
September 15 Buys Minecraft maker Mojang for $2.5 billion.
October 9 Apologizes for comments discouraging women from asking for raises.
October 22 Expands Garage, Microsoft’s in-house idea factory, to let the public chip in.
November 12 Announces that .NET software framework will be open source.
December 15 Releases Skype Translator in beta.
The rebranding challenge, in particular, requires consistent reinforcement, and Nadella has been networking at a feverish pace since he started the job. In London he'd started taking customer meetings shortly after 5 am that day, and his calendar was packed until well after sundown. (As for the reason his remarks were so short, I was told the conference started later than planned.) Since being named CEO, he's kept up a hefty speaking schedule and met with small groups of journalists over dinner to amplify his efforts. He has relied on board chair John Thompson, who ran security company Symantec for years and is now CEO of the software company Virtual Instruments, to make introductions for him up and down Highway 101. He has spent time meeting with startup founders like Ryan Smith, who runs Utah-based survey software company Qualtrics and who presented to Nadella at the invitation of venture firm Accel. Nadella asked Smith a half-dozen questions, quickly picking up on where Smith placed his strongest engineering talent. Smith was impressed. It was his first time meeting with anyone in Redmond. "Historically, companies have struggled a little bit on how to work with Microsoft," Smith says. "I mean, where do you start?" The meeting shifted his impression of the company. "This guy's different," he says of Nadella. "He's humble." This attitude has helped Nadella forge new partnerships with outfits like Dropbox and Salesforce. The Salesforce partnership is particularly surprising. For a long time, Microsoft had considered the cloud-computing company an enemy, even launching Dynamics CRM, a direct Salesforce competitor. But Nadella realized that many of Salesforce's customers also used Office 365, and he began wondering if the two products might be combined. So last spring he called up CEO Marc Benioff to propose a partnership. In the first half of 2015, Salesforce will be integrated with Office, SharePoint, and OneDrive for Business on Android and iOS. A Salesforce app for Outlook will also become available, and Salesforce apps for Windows phones and Excel will follow. Says Benioff: "Before, we just were not able to partner with Microsoft. Satya has opened a door that was closed. And locked. And barricaded." The new, warm and fuzzy, more collaborative Microsoft has even embraced open source software, the collective multiauthored approach to writing code that Ballmer once referred to as a cancer. In November, Microsoft opened up its entire .NET framework, its programming infrastructure for building and running applications and services.
This new attitude won't necessarily make developers excited about Project HoloLens. But there is optimism in the air. Soon after Microsoft announced that .NET would be open source, Box CEO Aaron Levie summed up the response in a pithy tweet: "Sometimes it feels like Satya is in one of those '80s teen movies when the parents go out of town. And it's great." As mind-blowing as a holographic tutorial is, or even the virtual surface of Mars, Project HoloLens' first killer app is likely to be the popular videogame Minecraft, which Microsoft acquired in September. For a generation of children, Minecraft has become the digital equivalent of Lego blocks, a highly collaborative form of play. Soon imaginative kids might be able to play in 3-D, working alongside holograms of their real-life friends to build things together. The promise of a product like this is central to helping Project HoloLens take off. As Terry Myerson, who runs Windows, told me, "If you want to play holographic Minecraft, the only place to do it is going to be on this." And you don't have to be an early-adopting Glasshole to want in on holographic Minecraft.
But you will probably have to wait a little while. Microsoft is being very deliberate in how it rolls out Project HoloLens. First, Nadella plans to spark the public imagination by introducing the device to folks it calls “makers”—the people who attend TED conferences and lined up to buy Google Glass—and the oh-so-critical developers. Microsoft plans to distribute lots of development kits this year. Next up will be the commercial partners. Finally, once the platform has critical mass, Microsoft will make it available to everyone, including the Minecraft -obsessed.
The slow rollout is because—in another sign of an attitude shift—Nadella says he wants to see how people react to Project HoloLens, and adjust the product accordingly. In 2007, when Steve Jobs introduced the iPhone, he resisted apps, preferring his customers to access the web through their Safari browsers. But after that approach tanked, in 2008 he released a software development kit for app makers and launched the App Store. In similar fashion, Nadella has defined a strategy for Project HoloLens but says its path will ultimately be determined by the behaviors and preferences of its developers and users.
Chief experience officer Julie Larson-Green.
Platon Companies large and small are pushing to invent the next computing interface—a canvas so critical it will be to smartphones what smartphones were to desktop computers. Facebook has Oculus. Google has Glass. And in Dania Beach, Florida, a stealth startup called Magic Leap has banked $542 million in its latest round of funding to develop something allegedly smarter than all of them. The ones that prevail will do so because developers and customers buy into the dream, sinking time and money into their platforms and causing innovation to flourish.
It will take a while for any of these competitors to succeed, and Kipman suggests that if users and developers take Project HoloLens in another direction, or don't take to it at all, Microsoft will be OK. The real beneficiaries of Project HoloLens will be the company's operating system, Windows 10; its cloud-computing product, Azure; and its suite of software products, Office 365. They'll continue to improve even if Project HoloLens doesn't. What's important is that more people find more ways to use them. What's important is that, as the next new technology platform emerges, whether it's Project HoloLens or not, Microsoft gets there early.
Just when will the next computing interface take hold? I press Nadella on this, but he's not one to predict the future. “What is that quote? I forget now who said this,” he says. “You always overestimate what you can get done in a year and underestimate what you can get done in 10 years.” Later, I look up the quote. He got the gist of it right. And the person who said it was Bill Gates.
This story appears in the February 2015 issue of WIRED.
" |
729 | 2,014 | "You Don't Have to Be Google to Build an Artificial Brain | WIRED" | "https://www.wired.com/2014/09/google-artificial-brain" | "You Don't Have to Be Google to Build an Artificial Brain By Cade Metz
Getty
When Google used 16,000 machines to build a simulated brain that could correctly identify cats in YouTube videos, it signaled a turning point in the art of artificial intelligence.
Applying its massive cluster of computers to an emerging breed of AI algorithm known as "deep learning," the so-called Google brain was twice as accurate as any previous system in recognizing objects pictured in digital images, and it was hailed as another triumph for the mega data centers erected by the kings of the web.
"The research is representative of a new generation of computer science that is exploiting the falling cost of computing and the availability of huge clusters of computers in giant data centers," The New York Times wrote in 2012, "leading to significant advances in areas as diverse as machine vision and perception, speech recognition, and language translation." Indeed, in the two years since, Microsoft released a Skype service that uses deep learning to instantly translate conversations from one language to another, Facebook hired one of the leading experts in the field to boost image recognition and other tools on its service, and everyone from Twitter to Yahoo snapped up their own deep learning startups.
But in the middle of this revolution, a researcher named Alex Krizhevsky showed that you don't need a massive computer cluster to benefit from this technology's unique ability to "train itself" as it analyzes digital data. As described in a paper published later that same year, he outperformed Google's 16,000-machine cluster with a single computer---at least on one particular image recognition test.
This was a rather expensive computer, equipped with large amounts of memory and two top-of-the-line cards packed with myriad GPUs, a specialized breed of computer chip that allows the machine to behave like many. But it was a single machine nonetheless, and it showed that you didn't need a Google-like computing cluster to exploit the power of deep learning.
Harnessing this AI technology still requires a certain expertise---that's why the giants of the web are buying up all the talent---and thanks to their massive data centers and deep pockets, the Googles of the world can take this technology to places others can't. But many data scientists are now using single machines---ordinary consumer machines built for gaming---to solve their own problems via deep learning algorithms.
At Kaggle, a site where data scientists compete to solve problems on behalf of other businesses and organizations, deep learning has become one of the tools of choice, and according to Kaggle chief scientist Ben Hamner, single machines have been used to tackle everything from analyzing images and speech recognition to chemoinformatics.
For Richard Socher, a Stanford University researcher who has made extensive use of deep learning in systems that recognize natural language, this is another sign that these AI techniques can trickle down to smaller companies. "It's very easy to deploy these kinds of models," Socher says. "Anyone can buy a GPU machine." At the same time, startups are beginning to build cloud services that offer deep learning tools, and others are rolling out software and consulting services to companies outside the giants of the web. This too can help democratize the technology. "There are only so many companies that have datasets the size of Google's and Facebook's and Yahoo's," says Socher, who only used single machines in his own deep learning work. "Other, normal companies have smaller datasets, and they can train models too." GPU is short for graphics processing unit. These chips were originally built to quickly generate graphics and other images on behalf of games and other highly visual applications, but because of their ability to handle a certain kind of math calculation, they're good for all sorts of other tasks. As it turns out, one of these tasks is deep learning.
Deep learning tries to mimic the behavior of neural networks in the human brain. In essence, it creates multi-layered software systems that---if properly configured---can train themselves as they analyze more and more data. Whereas traditional machine learning requires an awful lot of hand-holding from human engineers, deep learning does not.
These multi-layered neural nets involve many computer chips working in parallel---thus Google's 16,000 machines---but you can also handle this kind of parallel processing with GPUs, processors that can be slotted into a single machine in enormous numbers. A top-of-the-line computer graphics card includes more than 2,000 of these processors.
In running deep learning algorithms on a machine with two GPU cards, Alex Krizhevsky could better the performance of 16,000 machines and their primary CPUs, the central chips that drive our computers. The trick lies partly in how the algorithms operate, but also in the fact that all those GPUs sit so close together. Unlike with Google's massive cluster, he didn't have to send large amounts of data across a network.
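To make the layered idea concrete, here is a minimal sketch of a multi-layered net in Python. It is a toy illustration with made-up data, not Krizhevsky's code: two layers of artificial neurons learn the XOR function by backpropagation, with each layer transforming its input on the way forward and the error signal flowing back through the layers to adjust the weights. Real systems stack many more layers and run this same arithmetic across thousands of GPU cores at once.

import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)  # layer 1: 2 inputs -> 8 neurons
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)  # layer 2: 8 neurons -> 1 output

for step in range(5000):
    h = np.tanh(X @ W1 + b1)                   # forward through layer 1
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # forward through layer 2
    d2 = (p - y) / len(X)                      # error at the output
    d1 = (d2 @ W2.T) * (1 - h ** 2)            # error pushed back to layer 1
    W2 -= 0.5 * h.T @ d2; b2 -= 0.5 * d2.sum(0)
    W1 -= 0.5 * X.T @ d1; b1 -= 0.5 * d1.sum(0)

print(np.round(p.ravel(), 2))                  # approaches [0, 1, 1, 0]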
As it turns out, Krizhevsky now works for Google---he was part of a deep learning startup recently acquired by the company---and Google, like other web giants, is exploring the use of GPUs in its own deep learning work. But as Socher explains, the larger point here is that GPUs provide an onramp to deep learning for much smaller outfits.
At Kaggle, data scientists are using deep learning algorithms on $3,000 gaming machines, which include a single graphics card. Typically, they're working on problems involving image and speech recognition, but the technology can help in other areas as well. The first Kaggle competition won by a deep learning machine involved predicting biological responses to certain molecules based on their chemical structure. "They trained on a single system," Hamner explains. "We take the same technology that's used for graphics and video games and apply it to scientific purposes." Certainly, there are cases where a 16,000-system cluster is far more useful---to say the least. The likes of Google and Facebook are analyzing enormously large collections of images and digital sound as they train their systems. But if your datasets are smaller, a single system can still provide a level of artificial intelligence that traditional machine learning systems aren't capable of.
As Socher points out, deep learning involves two stages of computing. There's the training stage---where a system learns to operate by analyzing data---and then there's the stage where you actually put the system to work on a problem. The training stage requires more processing power, but in many cases, he says, you can even train systems on single machines. "It all depends on how fast of a turnaround you want," he says.
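The split shows up in even the simplest learner. In this illustrative sketch (a toy logistic-regression model on invented data, not anything Kaggle or Socher actually uses), training loops over the data a thousand times, while putting the trained system to work is a single cheap calculation per query:

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                            # 500 example inputs
y = (X @ np.array([1.5, -2.0, 0.5]) > 0).astype(float)   # toy labels

# Stage 1: training (the costly part, with many passes over the data).
w = np.zeros(3)
for step in range(1000):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    w -= 0.1 * X.T @ (p - y) / len(X)

# Stage 2: putting the system to work (one cheap forward pass per query).
def predict(x):
    return 1.0 / (1.0 + np.exp(-(x @ w))) > 0.5

print(predict(np.array([1.0, -1.0, 0.0])))               # True, matching the learned rule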
The added rub is that, well, the giants of the web are buying up all the deep learning talent, and as Hamner says, this talent is still vital in setting up these neural nets. "Training a deep neural net is still just as much an art as a science. Many parameters used to train neural networks are based on intuition." That said, many deep learning algorithms are open source, meaning anyone can use them, and various startups, including a San Francisco outfit called Skymind, are working to train data scientists in the vagaries of these algorithms. The Googles and the Facebooks are leading the way in this AI revolution, but many others will follow.
" |
730 | 2,014 | "Man Behind the 'Google Brain' Joins Chinese Search Giant Baidu | WIRED" | "https://www.wired.com/2014/05/andrew-ng-baidu" | "Man Behind the 'Google Brain' Joins Chinese Search Giant Baidu By Daniela Hernandez
Andrew Ng. Photo: Ariel Zambelich/Wired
Andrew Ng is the man who helped launch Google's wildly ambitious effort to recreate the human brain with computer hardware and software. And now, he will oversee a similar project at Baidu, often called "the Google of China." Last year, in Cupertino, California, not far from Apple headquarters, Baidu quietly opened a research outpost dedicated to "deep learning"--a subfield of artificial intelligence that seeks to vastly improve computing tasks by mimicking the way the human brain operates--and in the months since, this operation has expanded in significant ways. Today, the Chinese search giant will announce that the lab has graduated to a much larger space in Sunnyvale and that Ng, a Stanford University professor, will oversee a new Baidu artificial intelligence research group that spans this lab and an operation in China.
"Andrew is one of the intellectual leaders in machine learning, and deep learning in particular," says Bruno Olshausen, the director of the Redwood Center for Theoretical Neuroscience at the University of California, Berkeley. "I expect he will continue to lead in this way at Baidu."
Deep learning--something that seeks to improve everything from natural language processing to voice and image recognition--is a technology that gestated in academia for decades, driven by a small group of maverick researchers, including Geoff Hinton of the University of Toronto and Yann LeCun of NYU. But in recent years, it has quickly spread to the giants of the internet.
Ng, a disciple of Hinton and LeCun, helped launch Google's efforts in this field, with a project called "the Google Brain," and after Google acquired Hinton's deep learning company, Hinton now works at least part-time at the search giant. Meanwhile, Facebook recently hired LeCun, and many other big names are exploring this technology, including Microsoft and IBM. Even Netflix is getting into the act.
At Baidu, Ng will run both the company's Sunnyvale lab and an R&D center based in Beijing, which will deal in deep learning and "big data" -- i.e. efforts to analyze large amounts of information. Baidu is set to invest about $300 million in this international project over the next five years.
Ng, who starts in his new job today, is stepping away from the day-to-day operations at Coursera, the online-education startup he co-founded. He will still be involved in some projects at Coursera, he says, and will remain the chairman of the board and the public face of the company. But his main focus will be on building up Baidu's AI chops and its Silicon Valley presence. He'll spend most of his time in Sunnyvale. "I'm really excited about the opportunity to build an international research organization from scratch," Ng says. "I've been super excited about AI for a long time, and this is an opportunity for me to return to that." Since taking a leave of absence from Stanford to start Coursera in 2012, Ng had been splitting his time between running the company and doing AI research. Coursera was growing steadily, having secured another $20 million in funding in November, but Baidu's Kai Yu, a longtime friend of Ng's who helped found the Chinese search company's deep learning labs, urged him to focus on artificial intelligence. "He was doing amazing things in online education, but this is not AI," Yu says.
During his last visit from Beijing last March, Yu approached Ng about joining Baidu. The pair talked several times at a Sheraton in Palo Alto--first over a pool-side breakfast and later that same day at dinner. Yu then introduced him to two of Baidu's vice presidents, Jing Wang and Alex Zheng. Later, Ng would fly to Beijing to meet with Baidu CEO Robin Li. Over a three-hour lunch, the two men mapped out their visions for what Baidu's research arm might look like and the types of problems it would tackle.
Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg The 38-year-old seems a good fit for the company. Like Li, Ng has close ties to both the U.S. and Asia, having grown up in Hong Kong and Singapore. That means he may be in a good position to help merge Baidu's Asia and California operations well. "I am a product of both of these cultures," Ng says. "Diversity leads to great creativity and having some of the best ideas from Beijing and Silicon Valley will allow us to innovate faster and come up with more surprising things." Under Ng's leadership, Baidu will grow its Silicon Valley office to roughly 200 people by the end of 2015, most of whom will be deep-learning researchers and computer systems engineers. The systems geeks will focus on things like building clusters of low-cost graphical processing units--or GPUs-- to crunch through the massive amounts of data that deep learning thrives on. GPUs let data scientists work through billions of calculations more quickly and cheaply than using traditional CPUs.
Google, IBM, and others have also leveraged GPUs for deep learning.
Meanwhile, Baidu's deep-learning researchers will focus on developing algorithms that are better at learning from unlabeled data, through what's called unsupervised learning--a concept Ng, together with Google's Geoff Hinton, has been pushing for years. "Andrew Ng and me believe strongly in unsupervised learning," Hinton told WIRED during a conversation at the Google Plex last summer. "Andrew, in particular, pushed on the idea that if we could just use unsupervised learning, then we could go quite a long way." That's because, right now, AI researchers have to do a lot of hand-holding when teaching computers to identify things like words and images. The true promise of AI will be realized, experts say, when computers can teach themselves--when they're able to absorb and understand data without always being told explicitly what it is. That process, Ng says, is closer to how humans learn and represents a still under-explored avenue for improving AI's capabilities.
Other deep-learning heavy-hitters agree. "We want to have machines that can take advantage of all of the data out there, and that requires better unsupervised learning," says the University of Montreal’s Yoshua Bengio, whose work focuses largely on unsupervised learning. Most of the world's data, you see, is unlabeled, and tagging all of it would be incredibly expensive. Figuring out better ways to get machines learning on their own could improve the economics of AI and lead to better applications for consumers. That's why Ng is joining Baidu.
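For a sense of what learning without labels means in practice, here is a minimal Python sketch of one classic unsupervised technique, a linear autoencoder. It is a toy illustration on invented data, not Baidu's or Google's actual approach: the network is never told what its inputs are, only asked to compress and reconstruct them, so whatever structure it finds comes from the data itself.

import numpy as np

rng = np.random.default_rng(0)
# Unlabeled data: 200 points that secretly lie near a line in 5-D space.
t = rng.normal(size=(200, 1))
X = t @ rng.normal(size=(1, 5)) + 0.05 * rng.normal(size=(200, 5))

W_enc = rng.normal(scale=0.1, size=(5, 1))  # encoder: 5-D input -> 1-D code
W_dec = rng.normal(scale=0.1, size=(1, 5))  # decoder: 1-D code -> 5-D output

for step in range(3000):
    code = X @ W_enc            # compress, with no labels anywhere in sight
    recon = code @ W_dec        # reconstruct
    err = recon - X             # the data itself is the only training signal
    W_dec -= 0.01 * code.T @ err / len(X)
    W_enc -= 0.01 * X.T @ (err @ W_dec.T) / len(X)

print("reconstruction error:", float(np.mean(err ** 2)))  # shrinks toward the noise floor

The network recovers the hidden one-dimensional structure on its own, which is the point: no human ever tagged the data.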
" |
731 | 2,015 | "Facebook's New AI Can Paint, But Google's Knows How to Party | WIRED" | "https://www.wired.com/2015/06/facebook-googles-fake-brains-spawn-new-visual-reality" | "Facebook's New AI Can Paint, But Google's Knows How to Party By Cade Metz
Facebook
Facebook and Google are building enormous neural networks---artificial brains---that can instantly recognize faces, cars, buildings, and other objects in digital photos. But that's not all these brains can do.
They can recognize the spoken word, translate from one language to another, target ads, or teach a robot to screw a cap onto a bottle.
And if you turn these brains upside down, you can teach them not just to recognize images, but to create images---in rather intriguing (and sometimes disturbing) ways.
As it revealed on Friday, Facebook is teaching its neural networks to automatically create small images of things like airplanes, automobiles, and animals, and about 40 percent of the time, these images can fool us humans into believing we're looking at reality. "The model can tell the difference between an unnatural image---white noise you'd see on your TV or some sort of abstract art image---and an image that you would take on your camera," says Facebook artificial intelligence researcher Rob Fergus.
"It understands the structure of how images work" (see images above).
Meanwhile, the boffins at Google have taken things to the other extreme, using neural nets to turn real photos into something intriguingly unreal. They're teaching machines to look for familiar patterns in a photo, enhance those patterns, and then repeat the process with the same image. "This creates a feedback loop: if a cloud looks a little bit like a bird, the network will make it look more like a bird," Google says in a blog post explaining the project. "This in turn will make the network recognize the bird even more strongly on the next pass and so forth, until a highly detailed bird appears, seemingly out of nowhere." The result is a kind of machine-generated abstract art (see below).
Google
On one level, these are party tricks---particularly Google's feedback loop, which evokes hallucinatory flashbacks.
And it should be noted that Facebook's fake images are only 64-by-64 pixels. But on another level, these projects serve as ways of improving neural networks, moving them closer to human-like intelligence. This work, says David Luan, the CEO of a computer vision company called Dextro, "helps better visualize what our networks are actually learning." They're also slightly disturbing---and not just because Google's images feel like a drug trip gone wrong, crossbreeding birds with camels in some cases, or snails with pigs (see below). More than this, they hint at a world where we don't realize when machines are controlling what we see and hear, where the real is indistinguishable from the unreal.
Google
Working alongside a PhD student at New York University's Courant Institute of Mathematical Sciences, Fergus and two other Facebook researchers revealed their "generative image model" work on Friday with a paper published to research repository arXiv.org.
This system uses not one but two neural networks, pitting the pair against each other. One network is built to recognize natural images, and the other does its best to fool the first.
Yann LeCun, who heads Facebook's 18-month-old AI lab, calls this adversarial training. "They play against each other," he says of the two networks. "One is trying to fool the other. And the other is trying to detect when it is being fooled." The result is a system that produces pretty realistic images.
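A minimal sketch of this adversarial setup, shrunk to one dimension so the whole game fits in a few lines of Python (a toy illustration, not Facebook's actual architecture or data): a "generator" learns to mimic a target distribution while a "discriminator" learns to tell real samples from generated ones, and each player's training signal comes from the other.

import numpy as np

rng = np.random.default_rng(0)

def real_batch(n):                        # "natural images," reduced to plain numbers
    return rng.normal(4.0, 1.25, size=n)  # the real data: samples from N(4, 1.25)

g_w, g_b = 1.0, 0.0   # generator: z -> g_w * z + g_b (starts out wrong)
d_w, d_b = 0.1, 0.0   # discriminator: x -> sigmoid(d_w * x + d_b)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr, n = 0.01, 64
for step in range(5000):
    # Discriminator turn: push D(real) toward 1 and D(fake) toward 0.
    fake = g_w * rng.normal(size=n) + g_b
    for x, label in ((real_batch(n), 1.0), (fake, 0.0)):
        grad = sigmoid(d_w * x + d_b) - label   # d(cross-entropy)/d(logit)
        d_w -= lr * np.mean(grad * x)
        d_b -= lr * np.mean(grad)

    # Generator turn: adjust so the discriminator calls its output real.
    z = rng.normal(size=n)
    fake = g_w * z + g_b
    grad_fake = (sigmoid(d_w * fake + d_b) - 1.0) * d_w  # chain rule through D
    g_w -= lr * np.mean(grad_fake * z)
    g_b -= lr * np.mean(grad_fake)

print(f"generator now centers its output near {g_b:.2f}")  # drifts toward the real mean of 4

With a discriminator this deliberately tiny, a single linear unit, only the mean of the distribution gets matched; the real systems play the same game with deep networks on both sides, which is what lets them capture the richer structure of natural images.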
According to LeCun and Fergus, this kind of thing could help restore real photos that have degraded in some way. "You can bring an image back to the space of natural images," Fergus says. But the larger point, they add, is that the system takes another step towards what's called "unsupervised machine learning." In other words, it can help machines learn without human researchers providing explicit guidance along the way.
Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg Eventually, LeCun says, you can use this model to train an image recognition system using a set of example images that are "unlabeled"---meaning no human has gone through and tagged them with text that identifies what's in them. "Machines can learn the structure of an image without being told what's in the image," he says.
Luan points out that the current system still requires some supervision. But he calls Facebook's paper "neat work," and like the work being done at Google, he believes, it can help us understand how neural networks behave.
Neural networks of the kind created by Facebook and Google span many "layers" of artificial neurons, each working in concert. Though these neurons perform certain tasks remarkably well, we don't quite understand why. "One of the challenges of neural networks is understanding what exactly goes on at each layer," Google says in its blog post (the company declined to discuss its image generation work further).
Google
By turning its neural networks upside-down and teaching them to generate images, Google explains, it can better understand how they operate. Google is asking its networks to amplify what it finds in an image. Sometimes, they just amplify the edges of a shape. Other times, they amplify more complex things, like the outline of a tower on a horizon, a building in a tree, or who knows what in a sea of random noise (see above). But in each case, researchers can better see what the network is seeing.
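The amplification loop Google describes is simple to sketch. In this toy Python illustration, a stand-in for Google's technique in which a fixed random filter bank plays the role of a trained layer, the image is repeatedly nudged in whatever direction makes the layer's response stronger, so faint patterns the "network" responds to get exaggerated on every pass.

import numpy as np

rng = np.random.default_rng(1)
img = rng.normal(0.0, 0.1, size=(32, 32))  # faint noise: the "cloud" we start from
filters = rng.normal(size=(8, 3, 3))       # stand-in for one layer's learned features

def layer_response(x):
    # Total squared filter response over all 3x3 patches, plus its gradient.
    total, grad = 0.0, np.zeros_like(x)
    for f in filters:
        for i in range(x.shape[0] - 2):
            for j in range(x.shape[1] - 2):
                a = np.sum(f * x[i:i+3, j:j+3])
                total += a * a
                grad[i:i+3, j:j+3] += 2 * a * f  # d(a^2)/d(patch)
    return total, grad

before = layer_response(img)[0]
for step in range(50):                     # the feedback loop
    _, grad = layer_response(img)
    img += 0.001 * grad / (np.abs(grad).mean() + 1e-8)  # amplify what the layer "sees"

print("layer activation grew from", round(before, 1), "to", round(layer_response(img)[0], 1))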
"This technique gives us a qualitative sense of the level of abstraction that a particular layer has achieved in its understanding of images," Google says. It helps researchers "visualize how neural networks are able to carry out difficult classification tasks, improve network architecture, and check what the network has learned during training." Plus, like Facebook's work, it's kinda cool, a little strange, and a tad frightening. The better computers get at recognizing what's real, it seems, the harder it gets for us.
" |
732 | 2,017 | "Alphabet, Google, and Sidewalk Labs Start Their City-Building Venture in Toronto | WIRED" | "https://www.wired.com/story/google-sidewalk-labs-toronto-quayside" | "Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Aarian Marshall | Transportation
Alphabet Is Trying to Reinvent the City, Starting With Toronto

Google has built an online empire by measuring everything. Clicks. GPS coordinates. Visits. Traffic. The company's resource is bits of info on you, which it mines, packages, repackages, repackages again, and then uses to sell you stuff. Now it's taking that data-driven world-building power to the real world. Google is building a city.
Tuesday afternoon, public officials gathered in Toronto to announce that Sidewalk Labs, a subsidiary under the Alphabet umbrella that also houses Google, will pilot the redevelopment of 12 acres of southeastern waterfront. Today the area hosts a few industrial buildings and some parking lots. In just a few years, it will be a techified community going by the name of Quayside. Sidewalk Labs has already devoted $50 million to the project, and Google will move its Toronto headquarters to the neighborhood. Once the company has proven out its concept, it plans to expand its redevelopment to the entire 800-acre waterfront area.
This will be a fully Google-fied neighborhood, built from scratch, with a touch of Canadian flavor. (Maple-fried bacon? Poutine? Unfailing bilingual politeness?) Sidewalk Labs promises to embed all sorts of sensors everywhere possible, sucking up a constant stream of information about traffic flow, noise levels, air quality, energy usage, travel patterns, and waste output. Cameras will help the company nail down the more intangible: Are people enjoying this public furniture arrangement in that green space? Are residents using the popup clinic when flu season strikes? Is that corner the optimal spot for a grocery store? Are its shoppers locals or people coming in from outside the neighborhood?

In this distinctly "data is deity" Silicon Valley way, Alphabet joins the grand tradition of master-planned cities, places built from near-nothing with big social goals in mind. Historically, these have not worked out. Walt Disney's Experimental Prototype Community of Tomorrow—Epcot—died with its creator, transformed into a play park rather than a viable community. South Korea's Songdo won't be finished until 2020, but the "smart city" has already fallen well short of its business and residential goals. The Brazilian capital of Brasilia is largely the work of one architect, Oscar Niemeyer, and though it's praised for its beauty and scale it doesn't quite function as a place. These efforts flop because they never feel quite human. They can't shake the sense that they've been engineered, not grown. "The problem is that it's not a city. It's that simple," the urban scholar Richard Burdett told the BBC about Brasilia. "The issue is not whether it's a good city or a bad city. It's just not a city. It doesn't have the ingredients of a city: messy streets, people living above shops, and offices nearby."

Sidewalk Labs seems well aware of the foibles of technologists building cities, the arrogant optimism that comes with seeing a place and deciding you can do it much better by razing and remaking. The company insists: This redevelopment will be extremely thoughtful. "This is not some random activity from our perspective," Alphabet Chairman Eric Schmidt said Tuesday. "This is the culmination, from our side, of almost 10 years of thinking about how technology can improve people's lives."

That long-gestating vision verges on the fantastical, with a tinge of Minority Report dystopia. The waterfront redevelopment proposal outlines a community where everybody has their own account, "a highly secure, personalized portal through which each resident accesses public services and the public sector." Use your account to tell everyone in the building to quiet down, to get into your gym, or to give the plumber access to your apartment while you're at work.
A mapping application will "record the location of all parts of the public realm in real time"—we're talking roads, buildings, lawn furniture, and drones. Construction will prioritize walkers and bikers, not cars, though shared "taxibots" and "vanbots" will roam the hood. (The company will work with sister company Waymo to iron out those self-driving details.) It will test a new housing concept called Loft, packed with flexible spaces to be used for whatever the community needs. It will experiment with building materials like plastic, prefabricated modules, and timber in the place of steel.
And yes, Sidewalk Labs says it's working on a comprehensive privacy plan.
The company will then crunch the numbers. Sidewalk Labs' data scientists will analyze the firehose of data to figure out what’s working and what’s not. It says it will use sophisticated modeling techniques to simulate “what-if scenarios” and determine better courses of action.
No one's using that park bench, but what if we moved it to a sunnier corner of the park? “Sidewalk expects that many residents, in general, will be attracted by the idea of living in a place that will continuously improve,” the company writes in its project proposal.
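Sidewalk Labs hasn't published those models, so take the following as a cartoon of the what-if idea rather than anything the company runs: simulate a scenario under two configurations, compare the outcomes, pick a winner. Every number below is invented.

```python
# Toy illustration of a "what-if scenario": compare simulated bench
# usage at two candidate spots. All figures are invented for the sketch.
import random

def simulate_bench_use(passersby_per_day, sit_probability, days=90):
    """Count simulated sit-downs over a season at one bench location."""
    return sum(
        sum(random.random() < sit_probability for _ in range(passersby_per_day))
        for _ in range(days)
    )

random.seed(0)
shady_corner = simulate_bench_use(passersby_per_day=120, sit_probability=0.02)
sunny_corner = simulate_bench_use(passersby_per_day=150, sit_probability=0.05)
print(f"shady: {shady_corner} sits, sunny: {sunny_corner} sits over 90 days")
```

The real versions would be fed by the sensor firehose rather than a random number generator, but the decision logic is the same: model, compare, move the bench.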
That only works if Quayside improves with its human residents in mind. The good news is that Sidewalk Labs' approach—fast, iterative, and based on observed facts—should take its cues from people, not lofty design principles. In fact, this is work that urban scholarship badly needs: despite decades of research into how cities work, scientists still struggle with gaps in the data. Governments mostly collect info about how pedestrians use sidewalks and cyclists use bicycle infrastructure by hand, and then only periodically. Sidewalk Labs could help agencies everywhere crack a few codes.
But this section of Toronto will be a tiny city, not a private company, so Sidewalk Labs faces a particular challenge: building a place that works for all. Alphabet is very good at sucking in personal information and repackaging it to sell stuff. But the stuff, in this case, includes baseline city functions, like garbage collection, safe streets, efficient public transit. “I think the company needs to show that it can provide city services that are not restricted to white, male millennials,” says Sarah Kaufman, who studies transportation and technology at New York University's Rudin Center for Transportation. “That means serving the elderly, the disabled, the poor—all populations that cities serve and private companies do not.” Sidewalk Labs insists it wants to do this. It says it will spend a year hammering out the details of the community with local policymakers, city leaders, academics, and activists. When a local reporter asked CEO Dan Doctoroff about his company’s appetite for integration with the wider Toronto community, he called it “insatiable.” The frictionless tech city, the one that data could build, wants to work for everyone. But feeling like a neighborhood will be the real struggle.
UPDATE 3:05 PM ET 10/19/17: This story's headline has been updated to clarify the relationship between Alphabet and Sidewalk Labs.
" |
733 | 2,020 | "Alphabet's Sidewalk Labs Scraps Its Ambitious Toronto Project | WIRED" | "https://www.wired.com/story/alphabets-sidewalk-labs-scraps-ambitious-toronto-project" | "Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Aarian Marshall | Transportation
Alphabet's Sidewalk Labs Scraps Its Ambitious Toronto Project

Sidewalk Labs proposed to redevelop a 12-acre parcel of Toronto's waterfront, along Lake Ontario.
When Google sibling Sidewalk Labs announced in 2017 a $50 million investment into a project to redevelop a portion of Toronto's waterfront, it seemed almost too good to be true. Someday soon, Sidewalk Labs promised, Torontonians would live and work in a 12-acre former industrial site in skyscrapers made from timber—a cheaper and more sustainable building material. Streets paved with a new sort of light-up paver would let the development change its design in seconds, able to play host to families on foot and to self-driving cars.
Trash would travel through underground chutes. Sidewalks would heat themselves. Forty percent of the thousands of planned apartments would be set aside for low- and middle-income families. And the Google sister company founded to digitize and techify urban planning would collect data on all of it, in a quest to perfect city living.
Thursday, the dream died. In a Medium post , Sidewalk Labs CEO Dan Doctoroff said the company would no longer pursue the development. Doctoroff, a former New York City deputy mayor, pointed a finger at the Covid-19 pandemic. “As unprecedented economic uncertainty has set in around the world and in the Toronto real estate market, it has become too difficult to make the … project financially viable without sacrificing core parts of the plan,” he wrote.
But Sidewalk Labs' vision was in trouble long before the pandemic. Since its inception, the project had been criticized by progressive activists concerned about how the Alphabet company would collect and protect data, and who would own that data. Conservative Ontario premier Doug Ford, meanwhile, wondered whether taxpayers would get enough bang from the project's bucks. New York-based Sidewalk Labs wrestled with its local partner, the waterfront redevelopment agency, over ownership of the project's intellectual property and, most critically, its financing. At times, its operators seemed confounded by the vagaries of Toronto politics.
The project had missed deadline after deadline.
The partnership took a bigger hit last summer, when Sidewalk Labs released a splashy and even more ambitious 1,524-page master plan for the lot that went well beyond what the government had anticipated, and for which the company pledged to spend up to $1.3 billion to complete.
The redevelopment group wondered whether some of Sidewalk Labs’ proposals related to data collection and governance were even “in compliance with applicable laws.” It balked at a suggestion that the government commit millions to extend public transit into the area, a commitment, the group reminded the company, that it could not make on its own.
That chunky master plan may remain helpful, Doctoroff said in his blog post. Sidewalk Labs did serious thinking about civic data management over the course of the two-and-a-half-year project. As recently as March, Sidewalk Labs executives discussed with WIRED how the company might approach the issue with complete transparency.
(Critics said even those efforts did not go far enough.) Doctoroff says that work—and the work of Sidewalk Labs’ portfolio companies, which seek to tackle various urban mobility and infrastructure problems—will continue.
Still, the project’s end raises questions about the “smart cities” movement , which seeks to integrate cutting-edge tech tools with democratic governance. The buzzwords, all the rage when the adage “data is the new oil” generated fewer eye rolls, suffered during the techlash. Cities and their residents became more suspicious of what Silicon Valley companies might do with their data.
In theory, one way to fix this sort of project is to actually start at the grassroots. “The next time this is done by Sidewalk Labs or any big tech corporation that wants to reimagine the future of neighborhoods, it will be done in close communication with communities,” says Daniel O’Brien, who studies research and policy implications of “big data” at Northeastern University's School of Public Policy.
Paradoxically, the Toronto project’s demise comes as data collection and surveillance are viewed as key tools to slow the spread of the novel coronavirus.
Google and Apple codeveloped smartphone technology that will automatically track infected patients' encounters with others. The companies say the data will only be recorded anonymously, and the contact tracing regimen may eventually liberate most Americans from sheltering in place. The world is about to go through a major experiment in what can and should be done with data. For now, an abandoned sliver of Toronto won't be part of it.
" |
734 | 2,019 | "Self-Driving Startup Aurora Buys Speed-Sensing Lidar Company | WIRED" | "https://www.wired.com/story/self-driving-startup-aurora-buys-speed-sensing-lidar" | "Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Alex Davies | Transportation
Self-Driving Startup Aurora Buys Speed-Sensing Lidar Company

Self-driving car developer Aurora is acquiring Blackmore, whose lidar sensing technology detects not just nearby objects, but their velocity.
In the race to develop a technology that, at its root, is about teaching robots how to understand their surroundings, Aurora just bought itself a fresh set of eyeballs. The developer of self-driving car technology announced Thursday it's acquiring lidar maker Blackmore, whose laser scanning tech offers the unusual and very helpful ability not just to detect nearby objects but to discern their velocity. The parties declined to disclose the terms of the deal.
In self-driving, the problems don’t get any bigger than perception. If a robot can reliably know what’s around it, deciding what to do—whether to turn the wheel and which pedal to work, for example—gets a whole lot easier. That’s what has fueled a booming market for lidar, which according to one report will generate more than $8 billion in annual revenue in 2032.
It also explains why Blackmore’s technology stands out among the scores of lidar makers vying for the business of Aurora and its competitors. The Bozeman, Montana–based outfit, which started up a decade ago to do work for the defense industry, uses a “frequency modulated continuous wave” system, also known as a Doppler lidar. When the infrared light hits an object and bounces back, the system determines both how far away it is (based on how long the round trip takes, like any lidar system) and its velocity. Knowing where something is headed and how fast is prized data. It means that if your lidar doesn’t find that object again a millisecond later—hard to guarantee when you’re cruising down the highway and tracking things 250 meters away or more—it can still make a good guess about where it is and where it’s going. Blackmore has at least one Doppler lidar competitor in Aeva , founded in early 2017 by a pair of former Apple engineers.
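The two quantities a Doppler lidar reports fall out of simple physics: distance comes from the pulse's round-trip time (true of any lidar), and radial velocity comes from the Doppler shift of the returning light. Radar measures speed the same way, just at far longer wavelengths. Here is a back-of-envelope sketch, with illustrative numbers rather than Blackmore's specifications:

```python
# Back-of-envelope math for what a Doppler lidar measures. The
# wavelength and example readings are illustrative, not Blackmore specs.
C = 299_792_458.0     # speed of light, m/s
WAVELENGTH = 1.55e-6  # a common infrared lidar wavelength, m

def range_from_round_trip(seconds):
    """Any lidar: the pulse travels out and back, so halve the path."""
    return C * seconds / 2

def velocity_from_doppler(frequency_shift_hz):
    """Doppler lidar: radial velocity from the shift of the return light."""
    return frequency_shift_hz * WAVELENGTH / 2

# A return arriving 1.67 microseconds after firing is about 250 m away...
print(f"{range_from_round_trip(1.67e-6):.0f} m")    # ~250 m
# ...and a 38.7 MHz shift means it is closing at about 30 m/s (~67 mph).
print(f"{velocity_from_doppler(38.7e6):.1f} m/s")   # ~30 m/s
```

That second number is the prize: a conventional lidar would need two scans and some guesswork to estimate the same velocity that a Doppler system reads off a single return.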
Aurora, led by a trio of self-driving industry veterans, has teams in Pittsburgh, Palo Alto, and San Francisco.
"These guys are the real deal," Aurora CEO Chris Urmson says of Blackmore. "They've got technology we think no one else has." The deal requires regulatory approval because Urmson is Canadian.
Urmson led Google’s self-driving car team through its early years, and cofounded Aurora in late 2016 with Sterling Anderson, who helmed the development of Tesla’s Autopilot system , and Drew Bagnell, a machine learning specialist who spent time with Uber’s autonomy wing. The startup hasn’t said much about its business model, but has partnerships with Volkswagen, Hyundai, and electric car startup Byton. In February it landed $530 million in Series B funding , a round led by Sequoia Capital and joined by Amazon. That cash made the Blackmore deal feasible, Urmson says.
The Blackmore team will stay in Bozeman but work closely with Aurora's perception engineers (based in Palo Alto, San Francisco, and Pittsburgh) once the deal is done, Urmson says. Together, they should find a balance between the kind of data that's most helpful for a robot learning to drive and what's possible in the realm of physics. Eventually, they'll face the question that has bedeviled every lidar maker trying to scale up its production: how to mass-produce a lidar that can withstand the rigors of the road, without making it so expensive that not even the hardest-working robo-taxi could amortize its cost.
Aurora is just the latest of its ilk to buy its own lidar maker. In October 2017, GM’s Cruise acquired Strobe and Ford-funded Argo AI snatched up Princeton Lightwave.
Waymo , the company born of the Google effort Urmson cofounded, spent millions of dollars and years developing its own laser system, and in 2017 tried to sue Uber into oblivion to protect its IP. ( They settled after a year-long legal brouhaha.) Meanwhile, startup Luminar has signed deals with two dozen customers, including Toyota, Volvo, Audi, and VW.
And the granddaddy of automotive lidar, Velodyne, whose spinning sensor made its debut at the 2005 Grand Challenge , makes sensors for more than 250 customers, including Uber and many smaller self-driving developers.
Not everyone thinks lasers are key to cracking self-driving.
Anthony Levandowski , the engineer at the center of the Waymo-Uber fight, has a new autonomous trucking company that’s all about using deep learning and camera-based vision to navigate the world.
Elon Musk has called lidar “laaaaame” and insists his Tesla cars will be “fully self-driving” in the near future without the pew-pew.
It’s a tempting vision, because cameras are already cheap and reliable.
Self-driving truck startup TuSimple has developed a camera system that can identify and track other vehicles up to 1,000 meters away, much farther than any lidar senses. Lidar makers, meanwhile, have struggled to find a setup that balances range, resolution, reliability, cost, and the ability to scale up manufacturing.
Urmson, though, speaks for most in autonomy when he says lidar is still a vital tool for making the technology real. Perhaps someday, deep learning software will change that. For now, Aurora’s sticking with the traditional recipe—and doing whatever it can to improve the ingredients.
" |
735 | 2,016 | "Elon Musk Promises Self-Driving Autonomous Tesla Motors Cars By the End of 2017 | WIRED" | "https://www.wired.com/2016/10/elon-musk-says-every-new-tesla-can-drive" | "Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Jack Stewart | Transportation
Elon Musk Says Every New Tesla Can Drive Itself

Elon Musk wants you to take your hands off the wheel, foot off the gas, and let him do the driving. Rather, let his cars take over. Tonight, at a press conference, he announced that every new Tesla will be fully capable of driving itself.
Once upgraded with a suite of cameras and sensors, Musk says, his cars will have the potential for level 5 autonomy—the highest level, which requires zero interaction from the driver.
The current generation of Tesla’s Autopilot is really just advanced cruise control. It can keep the car in its own lane, and avoid driving into the vehicle in front. But try to get off the freeway, let alone navigate down a honking commercial boulevard, and the autopilot is back in driver's ed.
Tesla hopes its ghost in the machine will be fully ready by the end of next year, and the proof will be a cross-country road trip. Musk said he could have a Tesla pick someone up from their home in LA and drop them off in the bright lights of Times Square, New York—then park itself. "It will do this without the need for a single touch, including the charger," says Musk.
New cars rolling out of Tesla's Fremont, California, factory will now have eight cameras—up from just one—for full 360-degree vision. Tesla has upgraded the ultrasonic sensors around the car's perimeter, too. And the vehicles have a new computer, boosting the processing power by a factor of 40. "It's basically a supercomputer in a car," says Musk. And that's in addition to updated GPS, inertial measurement unit, and other parts of the self-driving central nervous system. All this will be included in the new, more affordable Model 3, too.
But not for free. As with Tesla's current "Autopilot convenience features," turning on that functionality comes at a cost—$8000, up from $3000—even though the hardware upgrades will come standard.
Tesla has been criticized for rolling out autonomous features before the technology is proven. Consumer Reports said Tesla's autopilot upgrades were "too much, too soon." Just this week, the German government asked the company to stop using the term autopilot, saying that it gives drivers too much confidence, and makes them think the car is more capable than it really is.
Musk ain't hearing all that. Instead of taking a step back, these upgrades are him taking a tire-squealing lurch forward. Full autonomy has always been his end goal, because he asserts it will save lives. This despite the highly publicized death of a Tesla driver using Autopilot in Florida earlier this year.
Musk says that is nothing compared to the more than 1.2 million people who die annually in car accidents when humans are in control. Musk chastised reporters on a press call, saying that if their reporting dissuades people from using autonomous vehicles "then you are killing people." As always, Elon Musk is incredibly bullish about his timeframes. Tesla's fully autonomous cars will have to be able to avoid pedestrians, deal with buses pulling out, recognize construction workers holding signs, avoid kids running into the street, find parking, swerve to avoid that cyclist that just appeared out of nowhere, and solve every other—practically infinite—complicated driving scenario, to be considered truly level 5. Google has been working towards that same goal since 2009, and is still refining and reworking the software that pilots its cars around certain cities.
That company's robo-cars recently hit a cumulative 2 million miles, and it is still pretty cagey about a full rollout.
Not that Tesla is going from zero to 60 on this; the company learns a bunch from its full fleet of vehicles. Every car, even those in customers' hands, collects data and sends it back to the company's headquarters, where engineers analyze and refine the system. Still, major automakers like Ford and Mercedes are giving a 2020-to-2025 timeframe for their cars to become self-driving.
Musk admits his roll-out will be slow. Cars with the new tech will actually have fewer features than current Teslas. Active cruise control and lane holding (which make up the current Autopilot) won't work until the cars have collectively racked up millions of miles of real-world driving.
Then, Tesla will update those features with over-the-air updates. The newer vehicles should be as capable as the existing ones by December, Musk says. Stand by: Musk has a track record of missing deadlines—even self-imposed ones he judges to be lenient.
Then, if you believe it, comes the truly tricky part: advanced self-driving. But not all at once. The cars will start small, perhaps by recognizing traffic lights first, and then graduating to four-way stops. Each feature will enter alpha road trials—which will include Musk's own car—only after meeting standards set by Tesla's engineers.
Only then will the updates be pushed to a wider group of cars. But still, the features will run in the background—so-called "shadow mode"—where the computer compares the actions it would have taken to what the driver does. Eventually, once Tesla engineers deem the software safer than a human driver, the computer will have the power to take full control.
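Tesla hasn't detailed how shadow mode works internally, but the comparison the article describes can be sketched with invented names and thresholds: log what the software would have done next to what the driver actually did, and measure how often they disagree.

```python
# Hypothetical sketch of the "shadow mode" comparison described above.
# Field names and the tolerance are invented for illustration; this is
# not Tesla's implementation.
from dataclasses import dataclass

@dataclass
class Frame:
    planned_steering: float  # degrees the software would have steered
    actual_steering: float   # degrees the human actually steered

def disagreement_rate(frames, tolerance_deg=5.0):
    """Fraction of frames where software and driver meaningfully differ."""
    misses = sum(
        abs(f.planned_steering - f.actual_steering) > tolerance_deg
        for f in frames
    )
    return misses / len(frames)

log = [Frame(2.0, 1.5), Frame(-10.0, 3.0), Frame(0.0, 0.2)]
print(f"disagreed on {disagreement_rate(log):.0%} of frames")  # 33%
```

A falling disagreement rate across millions of miles is, presumably, the kind of evidence engineers would want before letting the planner out of the shadows.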
Automotive engineers agree that self-driving cars will come sooner or later. Musk just wants to make it sooner. Much sooner.
" |
736 | 2,023 | "The Andy Warhol Copyright Case That Could Transform Generative AI | WIRED" | "https://www.wired.com/story/andy-warhol-fair-use-prince-generative-ai" | "Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Madeline Ashby | Culture
The Andy Warhol Copyright Case That Could Transform Generative AI

Andy Warhol probably never said that thing about everyone in the future getting their 15 minutes of fame. It might have been Swedish art collector Pontus Hultén. Or painter Larry Rivers. Or photographer Nat Finkelstein.
Warhol is the household name, though, so he gets the credit. But he did say this: “Being good in business is the most fascinating kind of art.” Warhol won his first advertising award in 1952. His client base included Tiffany & Co., Columbia Records, and Vogue.
He knew the value of commercial licensing. He was also an avid fan of new technologies: Polaroid kept its SX-70 model in production specifically for him; in 1985, he painted Debbie Harry with a Commodore Amiga when digital art was otherwise unheard of. If Warhol were alive today, he’d likely be tinkering with generative AI —if he could keep the rights to what it produced.
The US Copyright Office determined recently that art created solely by AI isn’t eligible for copyright protection. Artists can attempt to register works made with assistance from AI, but they must show significant “human authorship.” The office is also in the midst of an initiative to “examine the copyright law and policy issues raised by artificial intelligence (AI) technology.” Currently a trio of artists is suing Midjourney, Stable Diffusion maker Stability AI, and DeviantArt, claiming that the tools are scraping artists’ work to train their models without permission. Last week, all three companies filed motions to dismiss, claiming that AI-generated images bear little resemblance to the works they're trained on and that the artists didn't specify which works were infringed. The artists are being represented by Matthew Butterick and the Joseph Saveri Law Firm, which also filed a class action against OpenAI, GitHub, and GitHub’s parent company Microsoft for allegedly violating the copyrights of coders whose work was used to train the Copilot programming AI, part of the “no-code ecosystem.” Getty Images filed a suit in January against Stability AI claiming “brazen infringement” of its image licensing catalog.
At the heart of many of these debates about AI’s impact on creative fields are questions of fair use. Namely, whether AI models trained on copyrighted works are covered, at least in the US, by that doctrine. Which is why we’re talking about Warhol. This spring, the US Supreme Court is expected to rule on Andy Warhol Foundation for the Visual Arts, Inc. v. Goldsmith , a case that will determine whether a series of images Warhol created of Prince were adequately transformative, under the fair use doctrine of the Copyright Act, of the photograph he used for reference. Put another way, the court that overturned Roe v. Wade is being asked to determine when an act of creation begins. Legal scholars everywhere are watching.
“Quite obviously this court has no trouble upending precedent,” says Rebecca Tushnet, a professor at Harvard Law School and founding member of the Organization for Transformative Works who submitted an amicus brief in the case backing the Warhol Foundation. “Anything could happen.”

The prelude to the case is a long one. In 1981, Lynn Goldsmith photographed Prince in her studio. In 1984, Vanity Fair (which, like WIRED, is a Condé Nast publication) licensed that photo for artistic reference. The artist was Andy Warhol. Warhol’s work became the magazine’s November cover, with Goldsmith given a photography credit. Between 1984 and 1987, Warhol again referenced Goldsmith’s photograph to create the “Prince Series,” 15 additional images. Between 1993 and 2004, the Warhol Foundation sold 12 of Warhol’s Prince works and transferred the remaining four to the Andy Warhol Museum, while exploiting the commercial licenses to the images for merchandise.
Following Prince’s death in 2016, Condé Nast published a special issue commemorating his passing and licensed Warhol’s “Orange Prince” from the Foundation for $10,250, without crediting Goldsmith. Discovering this and the “Prince Series” itself, Goldsmith contacted the Warhol Foundation, which sued her, preemptively, claiming fair use. Goldsmith countersued for infringement. In 2019, a federal district court ruled in the foundation’s favor. But in 2021, the Second Circuit Court of Appeals sided with Goldsmith. The Supreme Court heard the case in October 2022. As of this writing, the court hasn’t released its decision.
“There’s a version of this case where it’s so obviously a derivative work,” says Ryan Merkley, managing director at Aspen Digital and chair of the Flickr Foundation. Goldsmith’s photo was provided for a single use but was used multiple times. “Why didn’t Goldsmith get paid for the thing she got paid for the first time?” The case has confounded observers, attorneys, and artists. It’s difficult to know whether Warhol appreciated Goldsmith’s contribution to the Prince series or how Prince felt about Warhol’s use of his likeness. Ultimately, those questions may never be answered. But what the Court must decide is whether Warhol’s piece is a significant transformation of Goldsmith’s photograph, and thus protected by fair use, or if it’s copyright infringement. Either way, the decision could greatly impact how copyright law is applied to what AI tools do with human-made works.
For years, the “sweat of the brow” doctrine within intellectual property law protected the effort and expense required to create something worthy of copyright. The phrase comes from English translations of Genesis 3:19: “In the sweat of your face you will eat bread until you return to the ground, for out of it you were taken. For dust you are and to dust you will return.” This is the New World Translation, the Bible used among Jehovah’s Witnesses like Prince. In a 1999 interview with Larry King, Prince said, “I like to believe my inspiration comes from God. I’ve always known God is my creator. Without him, nothing works.” Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg It may seem strange to consult the Bible for guidance on intellectual property law, but much abolitionist argument arose from the belief that humans were, as the Constitution says, “endowed by their Creator with certain unalienable Rights.” In 1857, the Commissioner of Patents refused Oscar J. E. Stuart a patent on a “double plow and scraper” designed by an enslaved man named Ned. The commissioner also denied Ned the patent. Without legal personhood, Ned couldn’t hold a patent or property. The short-lived Confederate States Patent Office granted slaveholders the rights to the intellectual property of the people they enslaved. The Confederacy’s position was that enslaved people weren’t entitled to the results of their physical and intellectual labor. Patents and copyrights are handled differently under US law, but the case is instructive of how labor factors into matters of intellectual property.
The “sweat of the brow” doctrine stuck around until at least 1991, when the Supreme Court ruled in Feist Publications, Inc. v. Rural Telephone Service Co.
that “simple and obvious” collections of facts, like phone books, no matter how onerous they were to collate, were not worthy of copyright. In 2016, the court declined the Authors Guild’s request to review the Second Circuit’s ruling on Google Books’ mass digitization project. By declining, the court left the Second Circuit’s opinion in place: Scraping, at least in the way Google Books does it, is fair use.
Then, in 2021, the Supreme Court reaffirmed this stance by ruling 6-2 that Google’s use of Java code and APIs for Android was also fair use.
The fair use doctrine relies on four measures that judges consider when evaluating whether a work is “transformative” or simply a copy: the purpose and character of the work, the nature of the work, the amount taken from the original work, and the effect of the new work on a potential market. This is why your epic Zutara fanfic is deemed noncompetitive with Avatar: The Last Airbender.
It’s a different format and noncommercial.
“Copyright is a monopoly, and fair use is the safety valve,” says Art Neill, director of the New Media Rights Program at California Western School of Law. Everything from true-crime podcasts to Twitter dunks rely on fair use. It’s the doctrine that makes possible every “ENDING EXPLAINED!!1!” video you’ve watched after killing a bottle of pinot on Sunday night. It’s also why Americans can share videos of police brutality. Cara Gagliano, staff attorney at the Electronic Frontier Foundation, calls it “a particularly important tool for anyone who speaks truth to power.” The EFF filed an amicus brief in the case, siding with the Warhol Foundation. “It protects your right to criticize and critique the works of others.” Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg Warhol had many muses, but fame was his most enduring. He made figurative icons into literal ones. Much like an actor rehearsing the same monolog by emphasizing different words, Warhol often repeated images: Marilyn Monroe, Elvis, Jesus. This established precedent for other works, like Shepard Fairey’s reinterpretation of a photo by Mannie Garcia, which became the “Hope” poster during Barack Obama’s 2008 presidential campaign. (The Associated Press, which held the license to Garcia’s photo, asked Fairey for a licensing fee in 2009. In turn, Fairey sued for a fair use declaratory judgment. They settled out of court in 2011.) By insisting that transformative works at minimum must “comprise something more than the imposition of another artist’s style,” the Second Circuit seemingly expected Warhol to “ print the legend.
” But in all likelihood, Warhol didn’t print it. At his Factory, acolytes were constantly at work executing Warhol’s vision. This method of production was central to Warhol’s project as an artist. His position that “being good in business is the most fascinating kind of art” has influenced artists like Keith Haring and Tom Sachs and groups like Meow Wolf and the Museum of Ice Cream. In the age of generative AI, it has a whole new relevance.
“Copyright is meant to be an incentive for creation, and AIs don’t need that incentive,” says Merkley. “I think if you let AIs make copyright, it will be the end of copyright, because they will immediately make everything and copyright it.” To illustrate this, Merkley describes a world where AI systems make every potential melody and chord change and then immediately copyright them, effectively barring any future musician from writing a song without fear of being sued. This is why, he adds, “copyright was meant for humans to make.” Now imagine that same tactic applied to prescription drug formulations or computer chip architecture. And that’s where steering the massive ship that is copyright runs into choppy waters. Copyright is a keystone in global trade agreements: The North American Free Trade Agreement, the Trans-Pacific Partnership, and others rely on a shared recognition of copyright between nations. Granting AI copyright would fundamentally alter trade policy. It could further erode or destabilize international relations.
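Merkley’s scenario is, at bottom, a combinatorics argument, and the arithmetic backs up his worry about scale: melodies multiply exponentially with length. A quick back-of-envelope calculation (ignoring rhythm, octaves, and rests, all of which only inflate the count further):

```python
# Back-of-envelope scale of Merkley's scenario: how many short melodies
# exist if each of n notes is one of the 12 chromatic pitches.
for n in (8, 12, 16):
    print(f"{n:>2}-note melodies: {12**n:,}")
# Even 8 notes alone yields 429,981,696 distinct melodies;
# 16 notes pushes the count past 10 to the 17th power.
```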
“AI is funded by extremists,” says technology entrepreneur and Prince fan Anil Dash. He points out that the investment capital required to create and develop artificial intelligence at scale is so huge that only a handful of people or companies could access it, and now they have total control of the technology. The extractive practice of training large language and image models on the collective commons of the internet without consent is, after all, no different from taking advantage of public roads to drive for Uber or Lyft.
“Their feeling is, any obstacle that is legal, procedural, policy-based, especially judicial or legislative, is a temporary distraction, and they can just throw money at that for a few years and make it go away,” Dash says.
“The no-code ecosystem is in general focused on extractive uses of technology,” says Kathryn Cramer, a science fiction editor and AI researcher at the Computational Story Lab at the University of Vermont. “There may be great things that can be accomplished with AI, but in the short term, what’s going to happen is a massive effort for people to make large amounts of money … as fast as possible, with as shallow as possible an understanding of the technology.” Like Warhol and Prince, Goldsmith’s work is iconic. After becoming the youngest member of the Directors Guild of America, and co-managing Grand Funk Railroad, she started an image licensing company. Decades before DSLR, Goldsmith carried cameras, lenses, film, and lights on her back, while standing for hours offstage. She kept shooting through the awful moment in 1977 when Patti Smith broke her neck onstage in Tampa. And in 1981, she took a photo of Prince that Warhol used to create an iconic and valuable series of images.
Prince himself vigorously defended both his image and his work. In 1993, during his fight to leave his contract with Warner Bros., he changed his name to a genderless, unpronounceable symbol. His press release said: “Prince is the name that my mother gave me at birth. Warner Bros. took the name, trademarked it, and used it as the main marketing tool to promote all of the music that I wrote.” As negotiations dragged, he wrote “SLAVE” on his cheek during performances. He called his next album Emancipation.
Speaking about it to Spike Lee in Interview magazine (itself cofounded by Warhol), Prince said, “You know, I just hope to see the day when all artists, no matter what color they are, own their masters,” referring to the very same type of master recordings (and rights agreements) that later caused Taylor Swift to rerecord entire albums.
This approach extended to the use of his likeness. Later in life, Dash says, Prince licensed images of himself so that he could ensure Black photographers earned the royalties. And he refused collaboration with artists who weren’t equally savvy. “He used to tell fans,” Dash says, “if you don’t own your masters, your master owns you.”
" |
737 | 2,016 | "Ford Says It’ll Have a Fleet of Fully Autonomous Cars in Just 5 Years | WIRED" | "https://www.wired.com/2016/08/ford-autonomous-vehicles-2021" | "Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Alex Davies | Transportation
Ford Says It’ll Have a Fleet of Fully Autonomous Cars in Just 5 Years

More than a century after introducing the Model T, Ford hopes to once again change how the masses move.
The company announced this morning that it will have thousands of fully autonomous vehicles in urban car-sharing and ride-hailing fleets by 2021. To achieve that goal, the company will double, to 300, the number of people at its Silicon Valley research center and add 60 autonomous vehicles to the fleet of 30 already deployed there.
The five-year timeline isn't terribly aggressive. Google, Nissan, and Mercedes-Benz see autonomous vehicles on the road by 2020, and Chinese tech giant Baidu says it will have the technology in 2019. But none of them has made promises as specific as those Ford CEO Mark Fields made today. "We see the upcoming decade for the automobile really centered around the automation of the automobile," Fields says.
Many automakers already offer limited automation in vehicles like the Tesla Model S and Model X (which have had trouble), and the Mercedes E-Class (that one's a bit confusing). But Ford, like Google, wants nothing less than full autonomy, a designation the Society of Automotive Engineers calls "Level 4." Fields says the fleet in Silicon Valley won't even have steering wheels or pedals.
Ford seems to have most of the pieces needed to do that. In July, it invested in Civil Maps, a Berkeley, California, startup that makes the software needed to turn LiDAR data into maps robo-cars can read and automakers can update. The automaker just invested heavily in LiDAR manufacturer Velodyne, a bid to make the technology far more affordable—the spinning bucket atop each Google car costs about $85,000. And it signed an exclusive licensing deal with Nirenberg Neuroscience to use that company's machine vision and deep learning tech. Israeli startup Saips will provide further help there with technology that helps robo-cars identify pedestrians, garbage cans, and the like.
The big question is who will run the fleet. GM has signed on with Lyft, while Uber is doing its own research, and Apple—which won't confirm it's developing a car—invested $1 billion in Chinese ride-sharing service Didi.
Ford doesn't have a dance partner, but, Fields says, "We have lots of options, and we talk with everyone." Fields won't say what the car might look like or where you'll see them beyond "dense urban areas." But the commitment underscores how serious Ford is about autonomy. "It fits very nicely with who we are as a company," he says. "Autonomous vehicles could potentially have the same impact on society that Henry Ford's moving assembly line had." The company that made the Model T ubiquitous wants to do the same with autonomous tech.
" |
738 | 2,018 | "Radars, Cameras, and Lidar: How Self-Driving Cars See the Road | WIRED" | "https://www.wired.com/story/the-know-it-alls-how-do-self-driving-cars-see" | "Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Alex Davies | Transportation
How Do Self-Driving Cars See? (And How Do They See Me)?

Our in-house Know-It-Alls answer questions about your interactions with technology.
Q: How Do Self-Driving Cars See? A: It’s a sunny day, and you’re biking along one of Mountain View’s tree-lined esplanades. You head into a left turn, and before you change lanes, you crane your head around for a quick look back. That’s when you see it. The robot. Chugging along behind you, in that left lane you’re aiming to call your own. Your pressing question—Does it see me?—is answered when the vehicle slows down, giving you plenty of space. And so now you wonder, how did it do that? How, exactly, do self-driving cars see? Perhaps unwittingly, you’ve hit on a cracker of a question. Making a robot that perceives its surroundings—not just spotting that lumpy mass ahead but understanding it’s a child, a distinction engineers have put actual time and effort into—is the main challenge of this young industry.
Get the thing to understand what’s going on around it as well as humans do, and the process of deciding how to apply the throttle, brake, and steering becomes something like easy.
Dozens of companies are trying to build self-driving cars and self-driving car technology, and they all approach the engineering challenges differently. But just about everybody relies on three tools to mimic the human’s ability to see. Take a look for yourself. (Be careful—you’re on a bike, remember?) Radar We’ll start with radar, which rides behind the car’s sheet metal. It’s a technology that has been going into production cars for 20 years now, and it underpins familiar tech like adaptive cruise control and automatic emergency braking. Reliable and impervious to foul weather, it can see hundreds of yards and can pick out the speed of all the objects it perceives. Too bad it would lose a sightseeing contest to Mr. Magoo. The data it returns, to quote one robotics expert, are “gobbledegook.” It’s nowhere near precise enough to tell the computer that you’re a cyclist, but it should be able to detect the fact that you’re moving, along with your speed and direction, which is helpful when trying to decide how to avoid slicing your bike into a unicycle.
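How does radar pick out those speeds? The Doppler effect: a reflection off a moving object comes back at a slightly shifted frequency, and the shift is proportional to the closing speed. Here is a minimal sketch of that arithmetic in Python, assuming a simplified single-tone radar at a typical 77 GHz automotive carrier (real automotive units use more elaborate FMCW processing, and every constant here is illustrative):

```python
# A minimal sketch (not any vendor's API) of how a radar return yields
# relative speed: the Doppler effect shifts the reflected frequency in
# proportion to how fast the target moves toward or away from the sensor.

C = 299_792_458.0   # speed of light, m/s
CARRIER_HZ = 77e9   # assumed: typical automotive radar carrier frequency

def relative_speed(doppler_shift_hz: float) -> float:
    """Relative speed in m/s; positive means the target is closing.

    The factor of 2 accounts for the round trip: the wave is shifted
    once on the way to the target and again on the reflection back.
    """
    return doppler_shift_hz * C / (2 * CARRIER_HZ)

# Example: a 5.1 kHz shift works out to roughly 10 m/s of closing speed.
print(f"{relative_speed(5_100):.1f} m/s")
```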
Cameras Now, gaze upon the roof. Up here, and maybe dotting the sides and bumpers of the car too, you’ll find the second leg of this sense-ational trio.
The cameras—sometimes a dozen to a car and often used in stereo setups—are what let robocars see lane lines and road signs. They only see what the sun or your headlights illuminate, though, and they have the same trouble in bad weather that you do. But they’ve got terrific resolution, seeing in enough detail to recognize your arm sticking out to signal that left turn. That’s so vital that Elon Musk thinks cameras alone can enable a full robot takeover. Most engineers don’t want to depend on just cameras, but they’re still working hard on the machine-learning techniques that will let a computer reliably parse a sea of pixels. Seeing your arm is one thing. Distinguishing it from everything else is the tricky bit.
Lidar If you spot something spinning, that’ll be the lidar. This gal builds a map of the world around the car by shooting out millions of light pulses every second and measuring how long they take to come back. It doesn’t match the resolution of a camera, but it should bounce enough of those infrared lasers off you to get a general sense of your shape. It works in just about every lighting condition and delivers data in the computer’s native tongue: numbers. Some systems can even detect the velocity of the things they see, which makes deciding what matters far easier. The main problems with lidar are that it’s expensive, its reliability is unproven, and it’s unclear if anyone has found the right balance between range and resolution. The 50-plus companies developing lidar are working to solve all of these problems. (Oh, and they don’t always spin.) Some outfits also use ultrasonic sensors for close-range work (those are what let your car beep you into madness when you’re backing into a tight space) and microphones to listen for sirens, but that’s just icing on the cake.
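The ranging math underneath all that laser-flinging is plain time-of-flight geometry: half the round-trip time, multiplied by the speed of light, plus the beam's firing angles to place the point in 3-D. A minimal sketch, with numbers chosen for illustration rather than taken from any particular sensor:

```python
# A minimal sketch (my illustration, not production code) of lidar's core
# trick: time a light pulse's round trip, halve it, and you have range.
# Combine range with the beam's firing angles to place a 3-D point.

import math

C = 299_792_458.0  # speed of light, m/s

def tof_to_range(round_trip_s: float) -> float:
    """Range in meters from a round-trip time of flight."""
    return C * round_trip_s / 2

def to_point(range_m: float, azimuth_rad: float, elevation_rad: float):
    """Spherical (range, azimuth, elevation) to Cartesian (x, y, z)."""
    x = range_m * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = range_m * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = range_m * math.sin(elevation_rad)
    return (x, y, z)

# Example: a pulse that returns after about 200 nanoseconds hit something
# roughly 30 meters away (the cyclist, perhaps).
r = tof_to_range(200e-9)
print(round(r, 1), to_point(r, math.radians(10), math.radians(-2)))
```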
Once the sensors pull in their data, the car’s computer puts it all together and starts the hard part: identifying what’s what. Is that a toddler or a garbage can? A leaf or a pigeon? A teen riding a scooter or a Wacky Waving Inflatable Arm-Flailing Tubeman ? Better hardware makes answering such questions easier, but the real work here relies on machine learning—the art of teaching a robot that this cluster of dots is an old man using a walker, and that swath of pixels is a three-legged dog. But once it knows how to see, the question of how to drive gets easy: Don’t hit either one of them.
Alex Davies is the editor of WIRED’s transportation section and routinely finds himself cycling on streets populated by robot cars, which he really, really hopes see as well as the techies promise.
What can we tell you? No, really, what do you want one of our in-house experts to tell you? Post your question in the comments or email the Know-It-Alls.
" |
739 | 2,018 | "Luminar's New Lidar Could Dominate the Self-Driving Car Market | WIRED" | "https://www.wired.com/story/luminar-lidar-self-driving-cars" | "Luminar's New Lidar Could Bring Vision to Every Robocar in the World. By Alex Davies, Transportation. [Photo caption: The receiver in Luminar's lidar unit, which detects when the laser pulses bounce back, is the size of a strawberry seed and costs just $3. Credit: Luminar]
Self-driving cars are nearly ready for primetime, and so are the laser sensors that help them see the world.
Lidar , which builds a 3-D map of a car’s surroundings by firing millions of laser points a second and measuring how long they take to bounce back, has been in development since 2005, when a guy named Dave Hall made one for the Darpa Grand Challenge, an autonomous vehicle contest.
In the decade-plus since then, if you wanted a lidar for your self-driving car, Velodyne was your only choice.
Yet Velodyne’s one-time monopoly has eroded in recent years, as dozens of lidar startups came to life, and robocar makers found their own way. Google’s sister company Waymo put years and millions of dollars into developing a proprietary system.
General Motors bought a lidar startup called Strobe.
Argo AI, which is making a robo-driving system for Ford , acquired one called Princeton Lightwave.
The latest challenger is Luminar, the Silicon Valley-based startup that already has a deal with Toyota, plus three more manufacturers it declines to name. Today, Luminar is announcing the introduction of its newest lidar unit, with a 120-degree field of view (that's enough to see what's ahead of the car, but you'd need a few to get a 360-degree view). And after a first production run of just 100 units, it's ready to start cranking them out by the thousand—more than enough to meet today's demand. And maybe enough to make self-driving cars cheaper for everybody.
“By the end of this year, we’ll have enough capacity to equip pretty much every autonomous test and development vehicle on the road, globally,” says CEO Austin Russell, who dropped out of Stanford in 2012 when he was 17 years old to make Luminar his full-time gig. “This is no longer being built by optics PhDs in a handcrafted process. This is a proper automotive serial product.” In its 136,000-square-foot facility in Orlando (an optics industry hub), the company has dropped the build time for a single unit from about a day to eight minutes. In the past year, it has doubled its staff, to about 350. It hired Motorola product guru Jason Wojack to head its hardware team. Alejandro Garcia came over from major auto industry supplier Harman to run manufacturing.
Luminar is playing catch up here. Last year, Velodyne opened a “megafactory” to ramp up production and built 10,000 laser sensors. President Marta Hall says it could build a million a year if it wanted to. But the ability to build lots of lidars isn’t enough to win here.
Lidar is a fantastic sensor—it’s more precise than radar and works in more conditions than cameras do—but it’s way too expensive. Velodyne’s top-shelf unit, which sees in 360 degrees with a 300-meter range, costs about $75,000 apiece. Buying in bulk will drop that cost, but that’s still a hard price tag to bear, even on a fleet vehicle that can amortize costs over years of service.
[Photo caption: At its Orlando production facility, Luminar can now make a lidar unit in about eight minutes—it used to take a day. Credit: Luminar]
Luminar made the cost question harder by making its lidar’s receiver (the bit that acts like your eye’s retina) out of indium gallium arsenide (InGaAs) instead of silicon. Why is this important? Well, to make your lidar “see” farther, you have to fire more powerful pulses of light. They have to be powerful so they have the strength to hit faraway objects and make it all the way back. Most lidars use lasers at the 905 nanometer wavelength. That’s invisible to humans. But if it hits an actual eyeball, like yours, with enough power, it can damage the retina.
If you want to fire more powerful pulses (and have your lidar “see” farther) without blinding actual people, you can use the 1550 nanometer wavelength, which is further into the infrared part of the spectrum, and thus can’t penetrate a human eyeball.
Which brings us back to silicon. Receivers made of silicon, which is cheap, can’t detect light at the 1550 wavelength. InGaAs can, but it’s far more expensive. So the industry standard is to use silicon, run at 905 nanometers, and accept you just can’t send your lasers all that far.
But Russell insisted on the extra power, which meant 1550 nanometers, which meant using a receiver made of InGaAs. As a result, he can fire pulses 40 times more powerful than what his competitors shoot, so his lidar can see extremely dark objects—the kind that can absorb 95 percent of light—even from 250 meters away. He says no one’s lidar can see so well at such distance.
But seriously, InGaAs, as the French say, coûte la peau des fesses (it costs an arm and a leg).
A receiver array about the size of a big potato chip can cost tens of thousands of dollars, Russell says. So Luminar built its own. The result, now in its seventh iteration, is about the size of a strawberry seed. (The entire unit, including the laser and accompanying electronics, is about half a foot square and three inches deep.) That includes the chip that times, to a tiny fraction of a second, how long each photon has been out in the world. It costs a piddling $3, obliterating Luminar’s cost concerns while allowing for that extra range and resolution. Russell wouldn’t reveal an exact price for the lidar as a whole, but says his customers are quite pleased. And when they’re finally ready to start offering you rides in their robo-taxis, maybe they won’t have to charge you as much for that trip home from the bar.
Luminar’s R&D team also managed to increase the “dynamic range” of the receiver. Just like how your pupils dilate based on light conditions, lidar receivers are tuned to pick up pulses of a certain strength (the farther a photon goes before bouncing back, the weaker it becomes). If you set it to look for faint signals and it gets hit by a much stronger pulse, you can fry the receiver. “We have countless blown-up detectors,” Russell says. The current unit can handle a much greater range of pulse strengths, without even a wisp of smoke.
Meanwhile, Luminar’s already working on the next generation sensor. That one, Russell says, will be affordable enough to put in consumer cars—making the gift of sight little more than a commodity.
" |
740 | 2,018 | "Baraja's New Lidar Uses Rainbow Physics to Help Self-Driving Cars See | WIRED" | "https://www.wired.com/story/baraja-lidar-prism-self-driving-cars" | "By Alex Davies, Transportation. [Photo caption: By making tiny adjustments to the wavelength of the infrared pulses it fires, Baraja's lidar dictates the angle at which they exit a prism—and the direction they take into the world. Credit: Baraja]
In the land of the self-driving vehicle, the car with the best lidar sensor is king. So goes the logic of the booming self-driving car industry. To drive safely, an autonomous vehicle needs to see the world around it, and the best way to do that is with a system that fires millions of pulses of light every second, measuring how long they take to bounce off nearby objects and building a detailed 3-D map.
Lidar, however, is hard. It’s a young technology—the first application designed specifically for driving dates to 2005—and it remains expensive and unproven when it comes to the automotive-grade reliability the car industry requires. That’s why dozens of lidar makers have emerged in recent years, each claiming they’ve got the laser-flinging solution that offers the right balance of range, resolution, robustness—and cost.
The latest newcomer to light up the dance floor is Baraja, an Australian startup founded by two former telecom workers. The key to their system? Prisms. Prisms and fiber optic cables.
One of the key challenges engineers face when they’re designing a lidar is how to move the laser back and forth, up and down, which is what it needs to do to take in all its surroundings. Velodyne, the oldest and biggest player in the market, sticks as many as 128 lasers into its sensor and spins the whole thing around up to 20 times per second. Luminar, a growing startup, does it with a pair of oscillating, dime-sized mirrors.
The argument against such setups is that moving parts add complexity, and that they’ll only handle the rigors of the road for so long before breaking down.
Baraja proposes a novel, mechanically simpler way to direct its laser sight. If you were paying attention in science class, you know white light going into a prism comes out divided into the colors of the rainbow on the other side. The order of that rainbow is based on the wavelength of each color. Red (around 700 nanometers) sits above orange (around 600). Indigo (420 to 440 nanometers) goes above violet (around 400).
The Australian lidar company uses this phenomenon to its advantage by shooting its single laser through what CEO Federico Collarte calls a prism-like material. He wouldn't provide details, but explains it's a sort of lens that refracts infrared light the way prisms do visible light. By making tiny adjustments to the wavelength of the infrared pulses it fires (all of them around 1550 nanometers), it dictates the angle at which they exit the glass—and the direction they take into the world. If it wants to focus its attention on one bit of the scene, it simply keeps pumping out pulses of light at the appropriate wavelength.
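To make the steering idea concrete, here is a toy model. The dispersion constant is an assumption invented for illustration, since Baraja hasn't published its optics, but it shows how sweeping the laser's wavelength sweeps the beam with no moving parts:

```python
# A minimal sketch of the steering idea, under an assumed linear
# dispersion model (the real optics and its constants are Baraja's
# secret sauce; these numbers are illustrative only).

CENTER_NM = 1550.0            # nominal wavelength of the infrared pulses
DISPERSION_DEG_PER_NM = 0.5   # assumed: degrees of deflection per nm of detuning

def exit_angle_deg(wavelength_nm: float) -> float:
    """Vertical exit angle, relative to the center wavelength's beam."""
    return (wavelength_nm - CENTER_NM) * DISPERSION_DEG_PER_NM

# Sweeping the laser from 1530 nm to 1570 nm scans the beam
# from -10 to +10 degrees, all without moving parts.
for wl in (1530, 1550, 1570):
    print(wl, "nm ->", exit_angle_deg(wl), "deg")
```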
Baraja’s co-founders, Collarte and CTO Cibby Pulikkaseril, borrowed the idea from the telecoms industry, where they both worked until a few years ago. There, a technique called wavelength division multiplexing allows one optical fiber to carry a bunch of signals, each on a different wavelength. Prisms are one tool for combining and separating those signals. Collarte and Pulikkaseril saw a growing need for reliable lidar in the nascent self-driving car industry, and realized the tech they were already working on could be stuck on the roof of a car. In July of 2015, they launched Baraja. Now, with the fourth iteration of their system, they’re ready to show the world what they’ve made.
“We didn’t invent this laser, we didn’t invent prisms,” says Collarte. “We’re just taking mature concepts from telecoms and moving them to a new market.” A lucrative market. An April report from Woodside Capital Partners predicts the lidar industry will be worth close to $10 billion by 2032, as cars with varying levels of automation approach ubiquity.
Baraja’s lidar carries a second design quirk that its creators think sets them apart. Because its sensor has a limited field of view, it needs to fire lasers from various points on the car to see all its surroundings. Competing systems do that by installing a bunch of lidars, each carrying its own laser (or lasers). Baraja uses one laser per car, sitting inside a box the size of a wireless router. From deep inside the car, it fires its pulses of light, which travel through fiber optic cables to the several prisms that sit inside blue plastic casing at various points on the exterior of the car.
[Photo caption: Baraja's single laser sits inside a box the size of a wireless router. The pulses of light it generates travel through fiber optic cables to the several prisms that sit inside blue plastic casing at various points on the exterior of the car. This setup, CEO Federico Collarte says, improves reliability and keeps maintenance costs low. Credit: Baraja]
The key advantage to this cyclopean setup is cost. You’re only paying for one main laser unit, and if one of those exterior units gets banged up by hail, a fender bender, or a malevolent pedestrian, it’s cheap and easy to replace. “You don’t want to pay an eye and a kidney for maintenance,” Collarte says.
Baraja’s lidar can see objects that reflect just 10 percent of light—think a pedestrian wearing black on a poorly lit street—from an impressive 240 meters, Collarte says. A self-driving developer would use an open API to program the laser, and would be responsible for analyzing the data that the system gathers. The Sydney-based company (which also has offices in Silicon Valley and China) took its name from the Spanish word for a deck of cards, whose size they hope to match with their laser unit. (It’s also the word for shuffle, which Collarte says goes nicely with their constant variation of laser wavelengths.) The novelty of the system, though, means potential customers will need to do their due diligence. The prism setup can only move the beams of light up or down; Baraja still relies on what Collarte calls a “mechanical aid” to move them left or right (he wouldn’t provide details). The need to run fiber optic cables through a car could be a pain point for automakers. And while Collarte promises a cost-competitive system, he has to fight off a pile of competitors for each customer.
Still, the lidar market is growing fast and remains wide open. No one company is likely to dominate, especially since different applications of self-driving tech will require different sorts of vision systems, says Shahin Farshchi, a partner at the venture capital firm Lux Capital who has invested in lidar-maker Aeva, as well as self-driving startup Zoox.
“It’s hard to imagine a one-size-fits-all product or technology.” Baraja may not have a perfect system, but when it comes to the gift of sight, it’s willing to bet that one laser—and a few prisms—is all you need.
" |
741 | 2,006 | "Say Hello to Stanley | WIRED" | "https://www.wired.com/2006/01/stanley" | "Say Hello to Stanley. By Joshua Davis.
Sebastian Thrun is sitting in the passenger seat of a 2004 Volkswagen Touareg that's trying to kill him.
The car hurtles down a rutted dirt road at 35 miles per hour somewhere in the Mojave Desert, bucking and swerving, kicking up a cloud of dust. Thrun, the youngest person ever to head Stanford's famed artificial intelligence laboratory, clings to an armrest. Mike Montemerlo, a speed-coding computer programmer and postdoc, is wedged in the backseat amid a tangle of wires and cables.
No one is driving. Or more precisely, the Touareg is trying to drive itself. But despite 635 pounds of gear - roof-mounted radar, laser range finders, video cameras, a seven-processor shock-mounted computer - the car is doing a lousy job. Thrun tightens his grip on the armrest. He's built plenty of robots, but he's never entrusted his life to one of his creations. He's scared, confused, and above all furious that his algorithms are failing.
Suddenly the steering wheel spins itself hard to the left and the car speeds toward a ditch. David Stavens, a programmer who is stationed in the driver's seat in case of emergency, grabs the wheel and fights the pull of the robotic autopilot, which is insisting on a plunge into the gulley. Stavens slams his foot down on the computer-controlled brake. Thrun hits the big red button on the console that disables the vehicle's navigation computers. The SUV skids to a halt. "Hey, that was exciting," Thrun says, trying to sound upbeat.
It wasn't supposed to be this way. In 2003, the Defense Advanced Research Projects Agency offered $1 million to anyone who could build a self-driving vehicle capable of navigating 300 miles of desert. Dubbed the Grand Challenge, the robot-vehicle race was hyped for months. It was going to be as important as the 1997 Kasparov-Deep Blue chess match. But on race day in March 2004, the cars performed like frightened animals. One veered off the road to avoid a shadow. The largest vehicle - a 15-ton truck - mistook small bushes for enormous boulders and slowly backed away. The favorite was a CMU team that, fueled by multimillion-dollar military grants, had been working on unmanned vehicles for two decades. Its car went 7.4 miles, hit a berm, and caught fire. Not a single car finished.
Back at Stanford, Thrun logged on to check the progress of the race and couldn't believe what he was seeing. It was a humiliation for the entire field of robotics - a field Thrun was now at the center of. Only a year before, he'd been named head of Stanford's AI program. In the quiet halls of the university's Gates Computer Science Building, the suntanned 36-year-old German was a whirlwind of excitement, ideas, and brightly colored shirts. He was determined to show what intelligent machines could contribute to society. And though he had never considered building a self-driving car before, the sorry results of the first Grand Challenge inspired him to give it a try.
He assembled a first-rate team of researchers, attracted the attention of Volkswagen's Palo Alto R&D team, and charged ahead. But here in the desert, he's facing the reality that the Touareg - dubbed Stanley, a nod to Stanford - is totally inadequate. With only three months to go before the second Grand Challenge, he realizes that some basic problems remain unsolved.
Thrun gets out to kick the dirt on the side of the road and think. While the car idles, he squints at the uneven terrain ahead. This was his chance to lead the way toward his vision of the new vehicular order. But for now, all he sees is mountains, sagebrush, and sky.
It started with a black-and-white videogame in 1979. Thrun, then 12, was spending most of his free time at a local pub in Hannover, Germany. The place had one of the first coin-operated videogames in town, and 20 pfennig bought him three lives driving at high speed through a stark landscape of oil slicks and oncoming cars. It was thrilling - and much too expensive. For weeks, Thrun scrutinized the graphics and then decided that he could re-create the game on his Northstar Horizon, a primitive home computer that his father, a chemical engineer, had bought for him. He shut himself in his room and devoted his young life to coding the Northstar. It ran at 4 MHz and had only 16 Kbytes of RAM, but somehow he coaxed a driving game out of the machine.
Though he didn't study or do much homework over the next seven years, Thrun ended up graduating near the top of his high school class. He wasn't sure what was next. He figured he'd think about it during his mandatory two-year stint in the German army. But on June 15, 1986 - the last day to apply for university admissions - military authorities told him he wouldn't be needed that year. Two hours later, he arrived at the centralized admission headquarters in Dortmund with only 20 minutes to file his application. The woman behind the counter asked him what he wanted to study - in Germany, students declare a major before arriving on campus. He looked down the list of options: law, medicine, engineering, and computer science. Though he didn't know much about computer science, he had fond memories of programming his Northstar. "Why not?" he thought, and decided his future by checking the box next to computer science.
Within five years, he was a rising star in the field. After posting perfect scores on his final undergraduate exams, he went on to graduate school at the University of Bonn, where he wrote a paper showing for the first time how a robotic cart, in motion, could balance a pole. It revealed an instinct for creating robots that taught themselves. He went on to code a bot that mapped obstacles in a nursing home and then alerted its elderly user to dangers. He programmed robots that slithered into abandoned mines and came back hours later with detailed maps of the interior. Roboticists in the US began to take note. Carnegie Mellon offered the 31-year-old a faculty position and then gave him an endowed chair. But he still hadn't found a research area to focus all his energy and skills on.
While Thrun was settling in at CMU, the hot topic in robotics was self-driving cars. The field was led by Ernst Dickmanns, a professor of aerospace technology at the University of the Bundeswehr. He liked to point out that planes had been flying themselves since the 1970s. The public was clearly willing to accept being flown by autopilot, but nobody had tried the same on the ground. Dickmanns decided to do something about that.
With help from the German military and Daimler-Benz, he spent seven years retrofitting a boxy Mercedes van, equipping it with video cameras and a bunch of early Intel processors. On a Daimler-Benz test track in December 1986, the driverless van accelerated to 20 miles per hour and, using data supplied by the videocams, successfully stayed on a curving road. Though generally forgotten, this was the Kitty Hawk moment of autonomous driving.
It sparked a 10-year international dash to develop self-driving cars that could navigate city streets and freeways. In the US, engineers at Carnegie Mellon led the charge with funding from the Army. On both sides of the Atlantic, the approach involved a data-intensive classification approach, a so-called rule-based system. The researchers assembled a list of easily identifiable objects (solid white lines, dotted white lines, trees, boulders) and told the car what to do when it encountered them. Before long, though, two main problems emerged. First, processing power was anemic, so the vehicle's computer quickly became overwhelmed when confronted with too much data (a boulder beside a tree, for instance). The car would slow to a crawl while trying to apply all the rules. Second, the team couldn't code for every combination of conditions. The real world of streets, intersections, alleys, and highways was too complex.
In 1991, a CMU computer science PhD student named Dean Pomerleau had a critical insight. The best way to teach cars to drive, he suspected, was to have them learn from the experts: humans. He got behind the wheel of CMU's sensor-covered, self-driving Humvee, flipped on all the computers, and ran a program that tracked his reactions as he sped down a freeway in Pittsburgh. In minutes, the computers had developed algorithms that codified Pomerleau's driving decisions. He then let the Humvee take over. It calmly maneuvered itself on Pittsburgh's interstates at 55 miles per hour.
Everything worked perfectly until Pomerleau got to a bridge. The Humvee swerved dangerously, and he was forced to grab the wheel. It took him weeks of analyzing the data to figure out what had gone wrong: When he was "teaching" the car to drive, he had been on roads with grass alongside them. The computer had determined that this was among the most important factors in staying on the road: Keep the grass at a certain distance and all will be well. When the grass suddenly disappeared, the computer panicked.
It was a fundamental problem. In the mid-'90s, microchips weren't fast enough to process all the potential options, especially not at 55 miles per hour. In 1996, Dickmanns proclaimed that real-world autonomous driving could "only be realized with the increase in computer performance … With Moore's law still valid, this means a time period of more than one decade." He was right, and everyone knew it. Research funding dried up, programs shut down, and autonomous driving receded back to the future.
Eight years later, when Darpa held its first Grand Challenge, processors had in fact become 25 times faster, outpacing Moore's law. Highly accurate GPS instruments had also become widely available. Laser sensors were more reliable and less expensive. Most of the conditions Dickmanns had said were necessary had been met or exceeded. More than 100 contestants signed up, including a resurgent CMU squad. Darpa officials couldn't hide their excitement. The breakthrough moment in autonomous driving was, they thought, at hand. In truth, some of the field's biggest challenges had yet to be overcome.
Once Thrun decided to take a crack at the second Grand Challenge, he found himself consumed by the project. It was as though he were 12 again, shut up in his room, coding driving games. But this time a Northstar home computer wasn't going to cut it. He needed serious hardware and a sturdy vehicle.
That's when he got a call from Cedric Dupont, a scientist at Volkswagen's Electronics Research Laboratory, just a few miles from the Stanford campus. The Volkswagen researchers wanted in on the Grand Challenge. They'd heard that Thrun was planning to enter the event, and they offered him three Touaregs - one to race, another as a backup, and a third for spare parts. The VW lab would outfit them with steering, acceleration, and braking control systems custom-built to link to Thrun's computers. Thrun had his vehicle, and Volkswagen executives had a chance to be part of automotive history.
It was history, however, that Red Whittaker planned on writing himself. Whittaker, the imposing, bald, bombastic chief of CMU's eponymously named Red Team, had been working on self-driving vehicles since the '80s. Whittaker's approach to problem solving was to use as much technological and automotive firepower as possible. Until now, the firepower hadn't been enough. This time, he would make sure that it was.
First, he entered two vehicles in the race: a 1986 Humvee and a 1999 Hummer. Both were chosen for their ruggedness. Whittaker also stabilized the sensors on the trucks with gyroscopes to ensure more reliable data. Then he sent three men in a laser-studded, ground-scanning truck into the desert for 28 days. Their mission: create a digital map of the race area's topography. The team logged 2,000 miles and built a detailed model of the desolate sagebrush expanses of the Mojave.
That was only the beginning. The Red Team purchased high-resolution satellite imagery of the desert and, when Darpa revealed the course on race day, Whittaker had 12 analysts in a tent beside the start line scrutinize the terrain. The analysts identified boulders, fence posts, and ditches so that the two vehicles would not have to wonder whether a fence was a fence. Humans would have already coded it into the map.
The CMU team also used Pomerleau's approach. They drove their Humvees through as many different types of desert terrain as they could find in an attempt to teach the vehicles how to handle varied environments. Both SUVs boasted seven Intel M processors and 40 Gbytes of flash memory - enough to store a world road atlas. CMU had a budget of $3 million. Given enough time, manpower, and access to the course, the CMU team could prepare their vehicles for any environment and drive safely through it.
It didn't cut it. Despite that 28-day, 2,000-mile sojourn in the desert, CMU's premapping operation overlapped with only 2 percent of the actual race course. The vehicles had to rely on their desert training sessions. But even those didn't fully deliver. A robot might, for example, learn what a tumbleweed looks like at 10 am, but with the movement of the sun and changing shadows, it might mistake that same tumbleweed for a boulder later in the day.
Thrun faced these same problems. Small bumps would rattle the Touareg's sensors, causing the onboard computer to swerve away from an imagined boulder. It couldn't distinguish between sensor error, new terrain, its own shadow, and the actual state of the road. The robot just wasn't smart enough.
And then, as Thrun sat on the side of that rutted dirt road, an idea came to him. Maybe the problem was a lot simpler than everyone had been making it out to be. To date, cars had not critically assessed the data their sensors gathered. Researchers had instead devoted themselves to improving the quality of that data, either by stabilizing cameras, lasers, and radar with gyroscopes or by improving the software that interpreted the sensor data. Thrun realized that if cars were going to get smarter, they needed to appreciate how incomplete and ambiguous perception can be. They needed the algorithmic equivalent of self-awareness.
Together with Montemerlo, his lead programmer, Thrun set about recoding Stanley's brain. They asked the computer to assess each pixel of data generated by the sensors and then assign it an accuracy value based on how a human drove the car through the desert. Rather than logging the identifying characteristics of the terrain, the computer was told to observe how its interpretation of the road either conformed to or varied from the way a human drove. The robot began to discard information it had previously accepted - it realized, for instance, that the bouncing of its sensors was just turbulence and did not indicate the sudden appearance of a boulder. It started to ignore shadows and accelerated along roads it had once perceived as being crisscrossed with ditches. Stanley began to drive like a human.
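In spirit, the recoding turned each sensor reading from a verdict into a probability. Here is a minimal sketch of that idea, reconstructed from the description above rather than from the Stanford team's actual code: compare past obstacle detections against the terrain the human actually drove over, learn a false-alarm rate, and weight fresh detections accordingly.

```python
# A toy reconstruction (not Stanford's code) of learning how much to
# trust a sensor cue by checking it against human driving.

def learn_false_alarm_rate(detections, human_drove_over):
    """Fraction of flagged 'obstacles' the human driver rolled right over.

    detections: booleans, True where the sensor flagged an obstacle.
    human_drove_over: booleans, True where the human drove there anyway.
    """
    outcomes = [drove for hit, drove in zip(detections, human_drove_over) if hit]
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

def obstacle_probability(detection, false_alarm_rate):
    """How credible a fresh detection is, given the learned error rate."""
    return (1.0 - false_alarm_rate) if detection else 0.0

# Example: the human drove straight over three of four flagged 'boulders'
# (they were sensor bounce), so a new hit is only 25 percent credible.
rate = learn_false_alarm_rate([True, True, True, True],
                              [True, True, True, False])
print(rate, obstacle_probability(True, rate))
```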
Thrun decided to take the car's newfound understanding of the world a step further. Stanley was equipped with two main types of sensors: laser range finders and video cameras. The lasers were good at sensing ground within 30 meters of the car, but beyond that the data quality deteriorated. The video camera was good at looking farther away but was less accurate in the foreground. Maybe, Thrun thought, the laser's findings could inform how the computer interpreted the faraway video. If the laser identified drivable road, it could ask the video to search for similar patterns ahead. In other words, the computer could teach itself.
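A toy version of that laser-teaches-camera loop might look like the following, assuming, purely for illustration, a crude brightness model in place of the real system's far richer statistics. Pixels the lidar has verified as drivable become free training labels; the camera then hunts for similar-looking ground beyond the lasers' reach:

```python
# A toy version (my illustration) of lidar-supervised camera vision,
# assuming a crude brightness model: near-field pixels the lasers have
# confirmed as road become labels for classifying the far field.

import statistics

def fit_road_model(road_pixels):
    """Mean and spread of brightness over lidar-confirmed road pixels."""
    mean = statistics.mean(road_pixels)
    spread = statistics.pstdev(road_pixels) or 1.0
    return mean, spread

def looks_drivable(pixel, model, k=2.0):
    """Call a far-field pixel drivable if it sits within k spreads."""
    mean, spread = model
    return abs(pixel - mean) <= k * spread

near_field = [118, 122, 120, 116, 124]  # brightness where lidar saw flat road
far_field = [120, 117, 60, 123]         # beyond laser range; 60 is a dark bush
model = fit_road_model(near_field)
print([looks_drivable(p, model) for p in far_field])  # [True, True, False, True]
```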
It worked. Stanley's vision extended far down the road now, allowing it to steer confidently at speeds of up to 45 miles per hour on dirt roads in the desert. And because of its ability to question its own data, the accuracy of Stanley's perception improved by four orders of magnitude. Before the recoding, Stanley incorrectly identified objects 12 percent of the time. After the recoding, the error rate dropped to 1 in 50,000.
It's half past 6 in the morning on October 8, 2005, outside of Primm, Nevada. Twenty-three vehicles are here for the second Grand Challenge. Festooned with corporate logos, lasers, radars, GPS transponders, and video cameras, they're parked on the edge of the gray-brown desert and ready to roll. The early morning light clashes with the garish glow of the nearby Buffalo Bill's Resort and Casino.
Red Whittaker is beaming. His 12 terrain analysts have completed their two-hour premapping of the route, and the data has been uploaded to the two CMU vehicles via a USB flash drive. The stakes are high this year: Darpa has doubled the prize money to $2 million, and Whittaker is ready to win it and erase the memory of the 2004 debacle. Last night, he pointed out to the press that Thrun had been a junior faculty member in Whittaker's robotics lab at CMU. "My DNA is all over this race," he boasted. Thrun won't be baited by Whittaker's grandstanding. He focuses on trying to calm his own frayed nerves.
The race begins quietly: One by one, the vehicles drive off into the hills. A few hours later, the critical moment is captured in grainy footage. CMU's H1 is in the middle of a dusty white desert expanse. The camera slowly approaches - the image is pixelated and overexposed. It's the view from Stanley's rooftop camera. For the past 100 miles, the Touareg has been tailgating the H1, and now it pulls close. Its lasers scan the exterior of its competitor, revealing a ghostly green outline of side panels and a giant, sensor-stabilizing gyroscope. And then the VW rotates its steering wheel and passes.
Darpa has imposed speed limits of 5 to 25 miles per hour, depending on conditions. Stanley wants to go faster. Its lasers are constantly teaching its video cameras how to identify drivable terrain, and it knows that it could accelerate more. For the rest of the race, Stanley pushes up against the speed limits as it navigates through open desert and curving mountain roads. After six hours of driving, it exits the final mountain pass ahead of every other team. When Stanley crosses the finish line, Thrun catches his first sight of an undiscovered country, a place where robots do all the driving.
The 128-mile race is a success. Four other vehicles, including both of CMU's entries, complete the course behind Stanley. The message is clear: Autonomous vehicles have arrived, and Stanley is their prophet. "This is a watershed moment - much more so than Deep Blue versus Kasparov," says Justin Rattner, Intel's R&D director. "Deep Blue was just processing power. It didn't think. Stanley thinks. We've moved away from rule-based thinking in artificial intelligence. The new paradigm is based on probabilities. It's based on statistical analysis of patterns. It is a better reflection of how our minds work." The breakthrough comes just as carmakers are embracing a host of self-driving technologies, many of them barely recognizable as robotic. Take, for example, a new feature known as adaptive cruise control, which allows the driver to select the distance to be maintained between the vehicle and the car in front of it. On the Toyota Sienna minivan, this is simply another button on the steering wheel. What that button represents, however, is a laser that surveys the distance to the vehicle ahead of it. The minivan's computer interprets the data and then controls the acceleration and braking to keep the distance constant. The computer has, in essence, taken over part of the driving.
But even as vehicles are being produced with sensors that perceive the world, they have, until now, lacked the intelligence to comprehensively interpret what they see. Thanks to Thrun, that problem is being solved. Computers are nearly ready to take the wheel. But are humans ready to let them? Jay Gowdy doesn't think so. A highly regarded roboticist, he has worked for nearly two decades to build self-driving cars, first with CMU and, more recently, with SAIC, a Fortune 500 defense contractor. He notes that in the US, about 43,000 people die in traffic accidents every year. Robot-driven cars would radically reduce the number of fatalities, he says, but there would still be accidents, and those deaths would be attributable to computer error. "The perception is that in the majority of accidents today, those who die are drunk, lazy, or stupid and bring it on themselves," Gowdy says. "If computers take over the driving, any deaths are likely to be perceived as the loss of people who did nothing wrong."
The resulting liability issues are a major hurdle. If a robotically driven car gets in an accident, who is to blame? If a software bug causes a car to swerve off the road, should the programmer be sued, or the manufacturer? Or is the accident victim at fault for accepting the driving decisions of the onboard computer? Would Ford or GM be to blame for selling a "faulty" product, even if, in the larger view, that product reduced traffic deaths by tens of thousands? This morass of liability questions would need to be addressed before robot cars could be practical. And even then, Americans would have to be willing to give up control of the steering wheel.
Which is not something they're likely to do, even if it means saving 40,000 lives a year. So the challenge for carmakers will be to develop interfaces that make people feel like they're in control even when the car is really doing most of the thinking. In other words, that small adaptive cruise control button in Toyota's minivan is a Trojan horse.
"OK, we're two of two, two of two, and one of one, no U-turn, speed advisory 25, large divider, POI gas station on left." Michael Loconte and Bill Wong are creeping through a quiet suburb just north of San Jose, California. They are driving a white Ford Taurus with a 6-inch antenna on the roof. Loconte wears a headset and mumbles coded descriptions of theésurroundings into the microphone - "two of two" means that he's in the right lane on a street with two lanes, and "POI" means point of interest. Wong scribbles with a digital pen, making landmark and street address notations on a scrolling map. "People think we're with the CIA," Loconte says. "I know it kind of looks like that." But they aren't spies. They're field analysts working for the GPS mapping company Navteq, and they're laying the foundation for the future of driving. On this Friday afternoon, they're doing a huge commercial extension of CMU's ditch-and-fence mapping operation. Navteq has 500 such analysts driving US neighborhoods, mapping them foot by foot. Though Thrun has proven that extensive mapping isn't needed to get from A to B, maps are critical when it comes to communicating with robotic vehicles. As automotive engineers build cars with increasing autonomy, the human interface with the vehicle will migrate from the steering wheel to the map. Instead of turning a wheel, drivers will make decisions by touching destinations on an interactive display.
"We want to move up the food chain," says Bob Denaro, Navteq's VP of business development. The company sees itself moving beyond the help-me-I'm-lost gizmo business and into the center of the new driving experience. That's not to say that the steering wheel will disappear; it will just be gradually de-emphasized. We will continue to sit in the driver's seat and have the option of intervening if we choose. As Denaro notes: "A person's role in the car is changing. People will become more planners than drivers." And why not - since the car is going to be a better driver than a human anyway. With the addition of map information, a car will know the angle of a turn that's still 300 feet away. Navteq is in the process of collecting slope information, road width, and speed limits - all things that bathe the vehicle in more data than a human could ever handle.
Denaro believes that the key to making people comfortable with the shift from driver to planner will be the same thing that made pilots comfortable accepting autopilot in the cockpit: situational awareness. If a robot simply says it wants to go left instead of right, we feel uncomfortable. But if a map shows a traffic jam to the right and the machine lists reasons for rerouting, then we have no problem pressing the Accept Route Change icon. We feel like we are still in control.
"Autopilot in the cockpit greatly extended the pilots' skills," Denaro says. Automation in driving will do the same thing.
Sebastian Thrun is standing in front of about a hundred of his colleagues and teammates at a winery overlooking Silicon Valley. He has a glass of champagne in one hand and a microphone in the other, and everyone is in a festive mood. Darpa just gave Stanford a $2 million check for winning the desert race, and Thrun is going to use a portion of the money to endow the Stanley fellowship for graduate students in computer science.
"Some people refer to us as the Wright brothers," he says, holding up his champagne. "But I prefer to think of us as Charles Lindbergh, because he was better-looking." Everyone laughs and toasts to that. "A year ago, people said this couldn't be done," Thrun continues. "Now everything is possible." There is more applause, and then the AI experts, programmers, and engineers take small, conservative sips of the champagne. The drive home is curvy and dark. If only the party were happening in Thrun's future - then the champagne could flow unimpeded and the cars would take everyone safely home.
The SUV's hard drives boot up, its sensors come to life, and it's ready to roll. Here's how Stanley works.
- J.D.
1. GPS antenna The rooftop GPS antenna receives data that has actually traveled twice into space - once to receive an initial position that is accurate to within a meter, and a second time to make corrections. The final reading is accurate to within 1 centimeter. (A sketch of this correction follows the list.)
2. Laser Range Finder So-called lidar scans the terrain 30 meters ahead and to either side of the grille five times a second. The data is used to build a map of the road.
3. Video camera The video camera scans the road beyond the lidar's range and pipes the data back to the computer. If the lasers have identified drivable ground, software looks for the same characteristics in the video data, extending Stanley's vision to 80 meters and permitting safe acceleration.
4. Odometry To contend with signals blocked by, say, a tunnel or mountain, a photo sensor in the wheel well monitors a pattern imprinted on Stanley's wheels. The data is used to determine how far Stanley has moved since the blackout. The onboard computer can then track the vehicle's position based on its last known GPS location.
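Items 1 and 4 together describe how Stanley keeps track of where it is. Here is a minimal sketch of both tricks, with every constant invented for illustration: differential GPS correction while the satellite signal is good, and wheel-odometry dead reckoning when it isn't.

```python
# A minimal sketch (my illustration, with invented constants) of the two
# positioning tricks above: differential GPS correction, and wheel-odometry
# dead reckoning for when the satellite signal blacks out. Coordinates are
# simplified to flat east/north meters.

import math

TICKS_PER_REV = 64            # assumed marks in the imprinted wheel pattern
WHEEL_CIRCUMFERENCE_M = 2.2   # assumed tire circumference

def differential_fix(rover_raw, base_measured, base_known):
    """Subtract the error a reference station measures at its known spot."""
    err_e = base_measured[0] - base_known[0]
    err_n = base_measured[1] - base_known[1]
    return (rover_raw[0] - err_e, rover_raw[1] - err_n)

def dead_reckon(last_fix, heading_rad, ticks):
    """Advance the last known fix by the distance the wheels have rolled."""
    dist = (ticks / TICKS_PER_REV) * WHEEL_CIRCUMFERENCE_M
    return (last_fix[0] + dist * math.cos(heading_rad),
            last_fix[1] + dist * math.sin(heading_rad))

# The base station knows it sits at (0, 0) but GPS says (0.8, -0.6);
# the same error is subtracted from the car's raw fix.
fix = differential_fix((105.2, 47.1), (0.8, -0.6), (0.0, 0.0))
# Then the signal drops in a tunnel: 640 ticks (ten wheel turns, 22 m)
# heading due east carries the estimate forward.
print(fix, dead_reckon(fix, 0.0, 640))
```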
Seven ways today's cars are already robots.
- Brian Lam 1. Road Condition Reporting When a car using BMW's hazard system slips on ice, its sensors activate traction control. Meantime, wireless technology alerts other cars in the area to the hazard.
2. Adaptive Cruise Control Luxury cars made by Audi, BMW, Infiniti, and others now use radar-guided cruise control to keep pace with the car ahead.
3. Omnidirectional Collision System GM has built an inexpensive collision detection system that allows GPS-equipped cars to identify each other and communicate wirelessly.
4. Lane-Departure Prevention Nissan has a prototype that uses cameras and software to detect white lines and reflective markers. If the system determines the vehicle is drifting, it will steer the car back into the proper lane.
5. Auto Parallel Park Toyota has a technology that uses a camera to identify a curbside parking space and turns the wheel automatically to reverse you into the spot.
6. Blind-Spot Sensors GM's GPS-based collision detectors can warn you when another car enters your blind spot.
7. Corner Speed An experimental Honda navigation computer anticipates upcoming turns and, if necessary, slows the vehicle to match predetermined safe speeds.
Contributing editor Joshua Davis ([email protected]) is the author of The Underdog.
He wrote about DVD bootlegging in issue 13.10.
credit Ian White Stanley: The Stanford Racing Team's autonomous vehicle is a modified Volkswagen Touareg that can scan any terrain and pick out a drivable course to a preset destination. Cup holders optional.
credit Joe Pugliese Team Stanley: From left, Sven Strohband, Sebastian Thrun, David Stavens, Hendrik Dahlkamp, Mike Montemerlo.
" |
742 | 2,018 | "Zoox Flashes Serious Self-Driving Skills in Chaotic San Francisco | WIRED" | "https://www.wired.com/story/zoox-self-driving-car-video-san-francisco" | "Alex Davies Transportation Zoox Flashes Serious Self-Driving Skills in Chaotic San Francisco Zoox has shown its system can handle some of San Francisco's toughest driving situations, but the real proof of a self-driving system doesn't fit onto a highlight reel.
Zoox San Francisco has some of the country's worst traffic. The lights always feel out of sync. The pavement is riddled with potholes. And pedestrians, cyclists, one-wheelers, and scooter-ers spill into the streets like the fog descending from the hills. It is, in all, a horrific place to drive. And for the same reasons, it's a tremendous place to teach a car to drive itself.
To borrow a phrase from a rival city, if your robot can make it here, it can make it anywhere.
That's why Zoox, a much-hyped self-driving car startup based in Silicon Valley, does much of its testing in the Financial District and North Beach, two of the city's most vexing neighborhoods. In a three-minute video shared exclusively with WIRED, we see the view from one of Zoox's test cars, a Toyota Highlander SUV retrofitted with its sensors and computing systems, as it faces down some of San Francisco's gnarliest thoroughfares and San Franciscans' most befuddling moves. The vehicle scoots around double-parked cars, makes left turns across traffic, and safely slides between hordes of pedestrians. It does it at night, in the rain, and on hills so steep, you can hardly see the intersection up ahead.
“We're handling the spectrum of complicated situations you need to drive in in a city like San Francisco,” says Jesse Levinson, Zoox's CTO. “We have built the software and hardware frameworks that can handle this.” (Video: https://www.youtube.com/embed/868tExoVdQw&feature=youtu.be) This is a big deal. Where most robo-car developers, like Waymo or Uber, are partnering with traditional automakers like Jaguar Land Rover and Volvo to design their vehicles and companies like Lyft to put them into operation, Zoox intends to run the entire operation solo. The company will run its own ride-hailing service, using a vehicle it's designing itself (the current iteration looks like a long wheelbase golf cart festooned with wires).
Levinson has been working on self-driving car technology since the mid-2000s. When he was a grad student at Stanford, he helped the university's team take second place in Darpa's 2007 Urban Challenge, a foundational event for the self-driving industry. In 2014, he co-founded Zoox with Tim Kentley-Klay; the company has raised $800 million to date, according to Bloomberg, but only started driving in the city about a year ago. “We've been able to conquer some difficult terrain really quickly,” he says.
The Zoox team (about 500-strong, many pulled from places like Tesla, Nvidia, and NASA) has expanded the territory its cars cover and is steadily making its software more efficient, to reduce the computing power any car needs to drive safely. They've applied machine learning to teach that software how to identify cyclists, pedestrians, and other actors, and predict how they're likely to behave. They've worked to prioritize safety while still valuing efficiency. “Part of driving well in a city is not being so conservative that you don't move,” Levinson says. “People are just gonna honk at you, and you won't have a product.” The current iteration of Zoox's custom vehicle looks like a long wheelbase golf cart festooned with wires, and can drive in either direction.
Zoox The car's victory over sundry obstacles—six-way intersections, GPS-blocking tunnels, highway on-ramps—indicates Zoox is well on its way to having that product. But, as with all self-driving tech demos, this video needs to be consumed alongside a few grains of salt. It shows about nine minutes' worth of driving (shown at 3x speed), a tiny and select sample size. It doesn't reveal how smoothly the car drives. One self-driving expert who reviewed the video (and asked not to be named discussing a competitor) noted the car does not seem to spot and classify every single pedestrian. Overall, though, the car looks good. “It's on par with the state of the art,” says Matthew Johnson-Roberson, who co-directs the University of Michigan Ford Center for Autonomous Vehicles and watched the video.
“It is an impressive set of scenarios and the vehicle does exhibit a very sophisticated level of ‘driving skills’ in this video,” says Huei Peng, who studies autonomous vehicles at the University of Michigan. But there’s a big difference between handling a sticky spot once and handling it every single time, in every combination of all the variables. The real proof of a self-driving system doesn’t fit onto a highlight reel. It’s based on millions of miles and thousands of hours of driving.
Levinson says as much, and while he isn’t yet ready to give a broader look at how his system performs, numbers Zoox filed with the California DMV this winter show its system made major progress in how far it could drive between human safety driver takeovers in the second half of the year. That needs to be proved over time, Levinson says, before Zoox will feel it’s ready to launch its service. “You can’t have service if you accidentally hit something once in a while.” (Based on public reports logging autonomous vehicle incidents in California, Zoox’s cars have never caused a crash.) But one thing is clear from the video. San Francisco is a wild place to learn to drive. And if you can master its mean, meandering streets, you’re ready to be a pro driver.
" |
743 | 2,019 | "GM's Cruise Rolls Back Its Target for Self-Driving Cars | WIRED" | "https://www.wired.com/story/gms-cruise-rolls-back-target-self-driving-cars" | "Alex Davies Transportation GM's Cruise Rolls Back Its Target for Self-Driving Cars After years of testing in San Francisco, Cruise has confirmed that it will launch its robo-taxi business in the city. When that will happen, though, it won't say.
Elijah Nouvelage/REUTERS Cruise, the startup General Motors acquired to develop its self-driving car, will launch an autonomous taxi service on the gnarly, crowded streets of San Francisco, CEO Dan Ammann said Wednesday. It will not, however, do so by the end of this year, the deadline it set for itself in 2017. Instead, Cruise will spend the rest of 2019 expanding its tests across the city and working on the less technical aspects of running such a service, from charging its electric cars to working with regulators to soothing a public that may be wary of robots roaming the roads.
The revised timeline underlines a reality that many industry prognosticators have resisted. Creating a self-driving car that can move through a city safely, reliably, and efficiently is a punishing task.
Nobody has managed it.
Nobody knows how much time, money, and manpower they'll need. And with a technology where mistakes can easily turn deadly, nobody wants to move ahead until they're confident they can do it without risking their reputation and financial well-being. When Waymo launched its service last December to meet its own deadline, it kept human backups behind the wheel—an underwhelming result after a decade of work.
“Anytime that you’re working on something that’s never been done before, it’s not surprising if timelines move around,” Ammann says. “If we do it right from the outset, that’s what will allow us to scale it up rapidly.” At this point, that means delaying the outset to an unspecified time.
Ammann took the Cruise CEO job at the end of last year, leaving his post as GM president to guide the outfit. (Cruise founder and former CEO Kyle Vogt is now the CTO.) Much of his work has been focused on stockpiling the cash he believes this effort will take. In the past year, Cruise has raised $7.25 billion, counting Softbank and Honda as major investors.
It now has 1,500 employees—nearly 40 people for each of the 40 employees it had at the time of GM’s acquisition, in March 2016. It has built up a network of fast chargers around San Francisco to reenergize its cars’ batteries, and plans to build more. And while the secrecy surrounding this nascent industry makes it hard to know who’s leading the pack, those stats suggest that Cruise is one of the few players—along with Waymo, Argo, Uber, and a handful of others—positioned to deliver something as complex as a robo-taxi service.
To this growing pile of money, people, and plugs, Ammann is adding miles. Cruise will significantly expand its tests in San Francisco. That means increasing (without offering specifics) its test fleet of sensor-clad Chevy Bolts, which currently numbers about 180, according to VentureBeat.
It also will mean keeping the cars, and their human safety operators, on the streets more of the time. “You’re going to see a lot more of them, doing a lot more miles,” Ammann says. And while the AV industry has moved away from counting miles driven as a metric for progress, Ammann says that you can’t drive in SF without encountering, and learning from, new situations. “Every mile is an adventure.” Cruise set the end of 2019 deadline for a commercial launch in November 2017 at an event for investors. Ammann, however, says he’s not concerned about shifting the timeline. “We’ll be gated by safety,” he says. Meaning, better to get it right, right from the start.
It's not surprising that Cruise hasn't specified a new deadline. Many startups are going after limited visions of autonomous vehicles, focusing on trucking, or short-distance shuttles, or moving food through the bike lane.
Cruise, like Waymo, remains focused on the marquee goal of a driverless taxi that anyone can summon on their smartphone and rely on for a ride. (Same goes for Ford, which is working with AV startup Argo and targeting 2021 for its service.) Cruise hasn't taken its sights off that golden ring—it just can't tell you when it will make the grab.
" |
744 | 2,018 | "How Self-Driving Supergroup Aurora Is Making Self-Driving Cars | WIRED" | "https://www.wired.com/story/aurora-self-driving-cars-plan" | "Alex Davies Transportation How Self-Driving Supergroup Aurora Plans to Make Robocars Real In the next few months, Aurora's self-driving cars should be nearly “feature complete”—capable of doing everything a human driver can, if with less skill, says CEO Chris Urmson.
Aurora The Traveling Wilburys were a short-lived phenomenon. From 1988 to 1991, Bob Dylan, George Harrison, Jeff Lynne, Roy Orbison, and Tom Petty—each a star in their own right and with a robust catalog to their name—combined their talents and experiences to produce two albums. That's 21 songs in 112 delightful minutes of music, a testament to the power of collaboration.
Just about a decade into the race to develop self-driving cars, this young industry has its own supergroup: Aurora Innovation, formed by three of the biggest names in the field and veterans of its highest-profile efforts. At the end of 2016, Chris Urmson, Drew Bagnell, and Sterling Anderson created the startup to deliver fully self-driving technology—no human involvement—and will start with operations in geofenced areas (somewhere), slowly expanding as the cars prove themselves.
The trio's experience runs deep. After helping lead Carnegie Mellon's efforts in Darpa's Grand Challenges, Urmson became a founding member of Google's self-driving team, which he ran until 2016. Anderson worked on the tech at MIT before bringing his talents to bear on Tesla's Autopilot system.
Bagnell, another CMU alum, is a machine learning expert who helped build Uber’s autonomy effort.
They entered a self-driving industry big on promises.
Waymo (which started as Google’s project) says it will deploy its cars in a commercial service by the end of this year. General Motors is targeting 2019.
Zoox, a secretive startup that has raised $800 million, is looking at 2020. Ford has promised large fleets of autonomous vehicles come 2021.
You might expect Aurora's founders, then, to throw their cumulative experience into an ambitious effort to outrace these more established programs to market, one of those together we can rule the galaxy-type deals. Instead, the zeitgeist at Aurora is one of humility. Urmson, Bagnell, and Anderson haven't put any hard dates on when their tech might be ready. They don't pitch a grandiose vision of a remade world of mobility. They seem to seek a role as something like a Tier 1 supplier, selling self-driving tech to automakers the way others sell airbags.
That's easier to understand when you take a closer look at their résumés. Waymo has covered nine million miles but reportedly still has trouble with left turns into traffic. Tesla's system has attracted the wary eye of the National Transportation Safety Board.
Uber’s car killed a woman in March. After years of hype, the difficulty of making self-driving technology really work seems to have set in.
“I think there’s a lot of people who underappreciate the subtlety and complexity of the problem,” Urmson says. And while he’s never been the boastful type, it’s quite a change from 2015, when he said his goal was to make sure his 11-year-old son would never need a driver’s license—an objective he hasn't brought up lately.
Aurora hasn't made much noise in general since starting work in January 2017, apart from announcing partnerships with Volkswagen, Hyundai, and Chinese startup Byton, and raising an impressive but hardly stunning $90 million in funding. But now that it's looking to build up its team (currently about 160-strong), it has published a blog post laying out its approach to robo-driving.
WIRED sat down with Urmson, Aurora's CEO, to go over its key points—including the role of machine learning, measuring progress, and proving safety—and how he and his cofounders are handling their second lap around this track.
In developing this technology, it's tempting to fall into what Urmson calls “ladder building.” For example, if you're working on bringing the car to a stop, you want to keep making it smoother and smoother. “You can imagine people spending years making slight changes to the algorithm, tuning the parameters,” Urmson says, making clear he's speaking from experience. “You feel like you're making progress. It's like Wile E. Coyote—your legs are moving real fast, but you're not actually getting anywhere.” With the chance to start fresh, Aurora is applying machine learning to this problem, which means finding the right way to teach a computer what a good stop looks like. They call this “fueling the rocket.” The results are harder to see than all those new rungs, but once you've finished, you can go a lot higher, a lot faster. The flip side is knowing where machine learning isn't especially helpful; one upside, Urmson says, is his team's ability to say “We've been down this road. That looks really appealing, but it's not actually gonna get us there. Let's do this.
” Machine learning is the right tool for teaching a robot to discriminate between an NBA player and an inflatable dancing man.
But if you want to track how that person’s moving, you can fall back on advanced but well understood math. “That’s a very well established field,” Urmson says, thanks to people developing things like ballistic missiles and anti-aircraft weaponry. “If you can come up with a good measure of the error, we can carry that through the math, and get you a really nice, precise output.” Today, Aurora’s cars are driving around Palo Alto and Pittsburgh (the company has offices in each city, as well as one in San Francisco). In the next few months, Urmson says they should be nearly “feature complete”—capable of doing everything a human driver can, if with less skill. After that, he says, it’s a matter of improving each ability.
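The "well understood math" Urmson alludes to is classical state estimation. As a hedged illustration (a sketch in the spirit he describes, not Aurora's code), here is a one-dimensional Kalman-style update that carries a measure of the error through each step:

```python
def kalman_update(estimate, variance, measurement, meas_variance,
                  process_variance=1e-3):
    """One predict/update step of a 1-D Kalman filter.

    Predict inflates the uncertainty; the measurement then shrinks it,
    blending the two by how much each is trusted.
    """
    variance += process_variance                   # predict: uncertainty grows
    gain = variance / (variance + meas_variance)   # how much to trust the sensor
    estimate += gain * (measurement - estimate)    # update the estimate
    variance *= (1.0 - gain)                       # and shrink the uncertainty
    return estimate, variance

# Example: noisy position readings of a pedestrian along one axis.
est, var = 0.0, 1.0
for z in [0.9, 1.1, 1.0, 1.2]:
    est, var = kalman_update(est, var, z, meas_variance=0.5)
print(round(est, 2), round(var, 3))
```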
Urmson’s no fan of the two standard ways of measuring progress: how many miles the cars have driven, and how often their human safety drivers have to take control. “How good are we at seeing traffic lights, or left turn arrows? That’s what we’re looking at for measurement,” he says. “We care about how close we are on each of those features.” One of many looming questions in this space is how to prove to wary regulators that self-driving cars are safe enough to deploy en masse. There’s no real mechanism for doing this—and the particulars will change from city to state to country—but Aurora has a plan in mind.
Urmson breaks the problem into two parts. The first is what happens when something breaks. You start by enumerating potential failure cases—sensors that can break, computers that may crash. Then, you lay out a fix, or response, for each. The car will pull over, it will activate backup systems, it will tell an adult, and so on.
The second bit is ensuring that when everything's working, it's working well enough. “That starts to look like a statistical argument,” Urmson says. Something like, We've driven by a million pedestrians, and we saw a million of them, or We've nailed 2,347,861 left-hand turns.
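To see the flavor of such a per-feature estimate, consider the standard "rule of three" from statistics: if no failures are observed in n independent trials, an approximate 95 percent upper bound on the failure rate is 3/n. The sketch below is an illustration of that textbook bound, not Aurora's actual methodology.

```python
def failure_rate_upper_bound(trials, failures=0):
    """Approximate 95% upper bound on a per-event failure rate.

    With zero observed failures, the rule of three gives 3 / trials;
    with failures observed, fall back to a crude point estimate.
    """
    if failures == 0:
        return 3.0 / trials
    return failures / trials

# "We've nailed 2,347,861 left-hand turns," with no failures observed:
print(failure_rate_upper_bound(2_347_861))  # ~1.3e-06 failures per turn
```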
Combined, these form an estimate of how often the car will fail. “Then we package that up into a document, and we have a conversation with a regulator, and we say, ‘This is why we believe we're safe. What do you think?'” That sort of technical and political savvy is key, but it may not be what sets Aurora apart from the rest of the field, at least not solely. It's that sense of humility, the appreciation for just how hard the problem is to crack. So for now, Aurora is focused on completing those features, then perfecting them, knowing from experience that this is hard, and it will take a long time and a lot of hard work. Or, as another supergroup put it: Well, it's all right // We're going to the end of the line.
" |
745 | 2,021 | "DALL·E: Creating images from text" | "https://openai.com/blog/dall-e" | "Illustration: Justin Jay Wang Research DALL·E: Creating images from text We've trained a neural network called DALL·E that creates images from text captions for a wide range of concepts expressible in natural language.
January 5, 2021 Image generation, Transformers, Generative models, DALL·E, GPT-2, CLIP, Milestone, Publication, Release DALL·E is a 12-billion parameter version of GPT-3 trained to generate images from text descriptions, using a dataset of text–image pairs. We've found that it has a diverse set of capabilities, including creating anthropomorphized versions of animals and objects, combining unrelated concepts in plausible ways, rendering text, and applying transformations to existing images.
See also: DALL·E 2, which generates more realistic and accurate images with 4x greater resolution.
Text Prompt an illustration of a baby daikon radish in a tutu walking a dog AI Generated images Text Prompt an armchair in the shape of an avocado. . . .
AI Generated images Text Prompt a store front that has the word ‘openai' written on it. . . .
AI Generated images Text Prompt the exact same cat on the top as a sketch on the bottom AI Generated images GPT-3 showed that language can be used to instruct a large neural network to perform a variety of text generation tasks.
Image GPT showed that the same type of neural network can also be used to generate images with high fidelity. We extend these findings to show that manipulating visual concepts through language is now within reach.
Overview Like GPT-3, DALL·E is a transformer language model. It receives both the text and the image as a single stream of data containing up to 1280 tokens, and is trained using maximum likelihood to generate all of the tokens, one after another.
[^footnote-1] This training procedure allows DALL·E to not only generate an image from scratch, but also to regenerate any rectangular region of an existing image that extends to the bottom-right corner, in a way that is consistent with the text prompt.
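A minimal sketch of this single-stream objective, with a placeholder model and pre-tokenized inputs (both assumptions for illustration): caption tokens and image tokens are concatenated, and the model is trained to predict every token from the ones before it.

```python
import torch
import torch.nn.functional as F

def training_loss(model, text_tokens, image_tokens):
    """Maximum-likelihood loss over a single text+image token stream.

    text_tokens: up to 256 caption tokens; image_tokens: 1024 discrete
    image-grid tokens. Both are 1-D long tensors.
    """
    stream = torch.cat([text_tokens, image_tokens])  # <= 1280 tokens total
    inputs, targets = stream[:-1], stream[1:]        # next-token prediction
    logits = model(inputs.unsqueeze(0))              # (1, T, vocab_size)
    return F.cross_entropy(logits.squeeze(0), targets)
```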
We recognize that work involving generative models has the potential for significant, broad societal impacts. In the future, we plan to analyze how models like DALL·E relate to societal issues like economic impact on certain work processes and professions, the potential for bias in the model outputs, and the longer term ethical challenges implied by this technology.
Capabilities We find that DALL·E is able to create plausible images for a great variety of sentences that explore the compositional structure of language. We illustrate this using a series of interactive visuals in the next section. The samples shown for each caption in the visuals are obtained by taking the top 32 of 512 after reranking with CLIP , but we do not use any manual cherry-picking, aside from the thumbnails and standalone images that appear outside.
[^footnote-2] Controlling attributes We test DALL·E’s ability to modify several of an object’s attributes, as well as the number of times that it appears.
a pentagonal green clock. a green clock in the shape of a pentagon.
Text Prompt AI generated images We find that DALL·E can render familiar objects in polygonal shapes that are sometimes unlikely to occur in the real world. For some objects, such as “picture frame” and “plate,” DALL·E can reliably draw the object in any of the polygonal shapes except heptagon. For other objects, such as “manhole cover” and “stop sign,” DALL·E’s success rate for more unusual shapes, such as “pentagon,” is considerably lower.
For several of the visuals in this post, we find that repeating the caption, sometimes with alternative phrasings, improves the consistency of the results.
a cube made of porcupine. a cube with the texture of a porcupine.
Text Prompt AI generated images We find that DALL·E can map the textures of various plants, animals, and other objects onto three dimensional solids. As in the preceding visual, we find that repeating the caption with alternative phrasing improves the consistency of the results.
a collection of glasses is sitting on a table Text Prompt AI generated images We find that DALL·E is able to draw multiple copies of an object when prompted to do so, but is unable to reliably count past three. When prompted to draw nouns for which there are multiple meanings, such as “glasses,” “chips,” and “cups,” it sometimes draws both interpretations, depending on the plural form that is used.
Drawing multiple objects Simultaneously controlling multiple objects, their attributes, and their spatial relationships presents a new challenge. For example, consider the phrase “a hedgehog wearing a red hat, yellow gloves, blue shirt, and green pants.” To correctly interpret this sentence, DALL·E must not only correctly compose each piece of apparel with the animal, but also form the associations (hat, red), (gloves, yellow), (shirt, blue), and (pants, green) without mixing them up [^footnote-3] We test DALL·E’s ability to do this for relative positioning, stacking objects, and controlling multiple attributes.
a small red block sitting on a large green block Text Prompt AI generated images We find that DALL·E correctly responds to some types of relative positions, but not others. The choices “sitting on” and “standing in front of” sometimes appear to work, while “sitting below,” “standing behind,” “standing left of,” and “standing right of” do not. DALL·E also has a lower success rate when asked to draw a large object sitting on top of a smaller one, when compared to the other way around.
a stack of 3 cubes. a red cube is on the top, sitting on a green cube. the green cube is in the middle, sitting on a blue cube. the blue cube is on the bottom.
Text Prompt AI generated images We find that DALL·E typically generates an image with one or two of the objects having the correct colors. However, only a few samples for each setting tend to have exactly three objects colored precisely as specified.
an emoji of a baby penguin wearing a blue hat, red gloves, green shirt, and yellow pants Text Prompt AI generated images We find that DALL·E typically generates an image with two or three articles of clothing having the correct colors. However, only a few of the samples for each setting tend to have all four articles of clothing with the specified colors.
While DALL·E does offer some level of controllability over the attributes and positions of a small number of objects, the success rate can depend on how the caption is phrased. As more objects are introduced, DALL·E is prone to confusing the associations between the objects and their colors, and the success rate decreases sharply. We also note that DALL·E is brittle with respect to rephrasing of the caption in these scenarios: alternative, semantically equivalent captions often yield no correct interpretations.
Visualizing perspective and three-dimensionality We find that DALL·E also allows for control over the viewpoint of a scene and the 3D style in which a scene is rendered.
an extreme close-up view of a capybara sitting in a field Text Prompt AI generated images We find that DALL·E can draw each of the animals in a variety of different views. Some of these views, such as “aerial view” and “rear view,” require knowledge of the animal’s appearance from unusual angles. Others, such as “extreme close-up view,” require knowledge of the fine-grained details of the animal’s skin or fur.
a capybara made of voxels sitting in a field Text Prompt AI generated images We find that DALL·E is often able to modify the surface of each of the animals according to the chosen 3D style, such as “claymation” and “made of voxels,” and render the scene with plausible shading depending on the location of the sun. The “x-ray” style does not always work reliably, but it shows that DALL·E can sometimes orient the bones within the animal in plausible (though not anatomically correct) configurations.
To push this further, we test DALL·E’s ability to repeatedly draw the head of a well-known figure at each angle from a sequence of equally spaced angles, and find that we can recover a smooth animation of the rotating head.
a photograph of a bust of homer Text Prompt Image Prompt AI generated images We prompt DALL·E with both a caption describing a well-known figure and the top region of an image showing a hat drawn at a particular angle. Then, we ask DALL·E to complete the remaining part of the image given this contextual information. We do this repeatedly, each time rotating the hat a few more degrees, and find that we are able to recover smooth animations of several well-known figures, with each frame respecting the precise specification of angle and ambient lighting.
DALL·E appears to be able to apply some types of optical distortions to scenes, as we see with the options “fisheye lens view” and “a spherical panorama.” This motivated us to explore its ability to generate reflections.
a plain white cube looking at its own reflection in a mirror. a plain white cube gazing at itself in a mirror.
Text Prompt Image Prompt AI generated images
Visualizing internal and external structure The samples from the “extreme close-up view” and “x-ray” style led us to further explore DALL·E’s ability to render internal structure with cross-sectional views, and external structure with macro photographs.
a cross-section view of a walnut Text Prompt AI generated images We find that DALL·E is able to draw the interiors of several different kinds of objects.
a macro photograph of brain coral Text Prompt AI generated images We find that DALL·E is able to draw the fine-grained external details of several different kinds of objects. These details are only apparent when the object is viewed up close.
Inferring contextual details The task of translating text to images is underspecified: a single caption generally corresponds to an infinitude of plausible images, so the image is not uniquely determined. For instance, consider the caption “a painting of a capybara sitting on a field at sunrise.” Depending on the orientation of the capybara, it may be necessary to draw a shadow, though this detail is never mentioned explicitly. We explore DALL·E’s ability to resolve underspecification in three cases: changing style, setting, and time; drawing the same object in a variety of different situations; and generating an image of an object with specific text written on it.
a painting of a capybara sitting in a field at sunrise Text Prompt AI generated images We find that DALL·E is able to render the same scene in a variety of different styles, and can adapt the lighting, shadows, and environment based on the time of day or season.
a stained glass window with an image of a blue strawberry Text Prompt AI generated images We find that DALL·E is able to flexibly adapt the representation of the object based on the medium on which it is being drawn. For “a mural,” “a soda can,” and “a teacup,” DALL·E must change how it draws the object based on the angle and curvature of the drawing surface. For “a stained glass window” and “a neon sign,” it must alter the appearance of the object from how it usually appears.
a store front that has the word ‘openai’ written on it. a store front that has the word ‘openai’ written on it. a store front that has the word ‘openai’ written on it. ‘openai’ store front.
Text Prompt AI generated images
With varying degrees of reliability, DALL·E provides access to a subset of the capabilities of a 3D rendering engine via natural language. It can independently control the attributes of a small number of objects, and to a limited extent, how many there are, and how they are arranged with respect to one another. It can also control the location and angle from which a scene is rendered, and can generate known objects in compliance with precise specifications of angle and lighting conditions.
Unlike a 3D rendering engine, whose inputs must be specified unambiguously and in complete detail, DALL·E is often able to “fill in the blanks” when the caption implies that the image must contain a certain detail that is not explicitly stated.
Applications of preceding capabilities Next, we explore the use of the preceding capabilities for fashion and interior design.
a male mannequin dressed in an orange and black flannel shirt Text Prompt Image Prompt AI generated images We explore DALL·E’s ability to render male mannequins in a variety of different outfits. When prompted with two colors, e.g., “an orange and white bomber jacket” and “an orange and black turtleneck sweater,” DALL·E often exhibits a range of possibilities for how both colors can be used for the same article of clothing.
DALL·E also seems to occasionally confuse less common colors with other neighboring shades. For example, when prompted to draw clothes in “navy,” DALL·E sometimes uses lighter shades of blue, or shades very close to black. Similarly, DALL·E sometimes confuses “olive” with shades of brown or brighter shades of green.
a female mannequin dressed in a black leather jacket and gold pleated skirt Text Prompt Image Prompt AI generated images We explore DALL·E’s ability to render female mannequins in a variety of different outfits. We find that DALL·E is able to portray unique textures such as the sheen of a “black leather jacket” and “gold” skirts and leggings. As before, we see that DALL·E occasionally confuses less common colors, such as “navy” and “olive,” with other neighboring shades.
a living room with two white armchairs and a painting of the colosseum. the painting is mounted above a modern fireplace.
Text Prompt Image Prompt AI generated images We explore DALL·E’s ability to generate images of rooms with several details specified. We find that it can generate paintings of a wide range of different subjects, including real-world locations such as “the colosseum” and fictional characters like “yoda.” For each subject, DALL·E exhibits a variety of interpretations. While the painting is almost always present in the scene, DALL·E sometimes fails to draw the fireplace or the correct number of armchairs.
a loft bedroom with a white bed next to a nightstand. there is a fish tank beside the bed.
Text Prompt Image Prompt AI generated images We explore DALL·E’s ability to generate bedrooms with several details specified. Despite the fact that we do not tell DALL·E what should go on top of the nightstand or shelf beside the bed, we find that it sometimes decides to place the other specified object on top. As before, we see that it often fails to draw one or more of the specified objects.
Combining unrelated concepts The compositional nature of language allows us to put together concepts to describe both real and imaginary things. We find that DALL·E also has the ability to combine disparate ideas to synthesize objects, some of which are unlikely to exist in the real world. We explore this ability in two instances: transferring qualities from various concepts to animals, and designing products by taking inspiration from unrelated concepts.
a snail made of harp. a snail with the texture of a harp.
Text Prompt AI generated images We find that DALL·E can generate animals synthesized from a variety of concepts, including musical instruments, foods, and household items. While not always successful, we find that DALL·E sometimes takes the forms of the two objects into consideration when determining how to combine them. For example, when prompted to draw “a snail made of harp,” it sometimes relates the pillar of the harp to the spiral of the snail’s shell.
In a previous section, we saw that as more objects are introduced into the scene, DALL·E is liable to confuse the associations between the objects and their specified attributes. Here, we see a different sort of failure mode: sometimes, rather than binding some attribute of the specified concept (say, “a faucet”) to the animal (say, “a snail”), DALL·E just draws the two as separate items.
an armchair in the shape of an avocado. an armchair imitating an avocado.
Text Prompt AI generated images In the preceding visual, we explored DALL·E’s ability to generate fantastical objects by combining two unrelated ideas. Here, we explore its ability to take inspiration from an unrelated idea while respecting the form of the thing being designed, ideally producing an object that appears to be practically functional. We found that prompting DALL·E with the phrases “in the shape of,” “in the form of,” and “in the style of” gives it the ability to do this.
When generating some of these objects, such as “an armchair in the shape of an avocado”, DALL·E appears to relate the shape of a half avocado to the back of the chair, and the pit of the avocado to the cushion. We find that DALL·E is susceptible to the same kinds of mistakes mentioned in the previous visual.
Animal illustrations In the previous section, we explored DALL·E’s ability to combine unrelated concepts when generating images of real-world objects. Here, we explore this ability in the context of art, for three kinds of illustrations: anthropomorphized versions of animals and objects, animal chimeras, and emojis.
an illustration of a baby daikon radish in a tutu walking a dog Text Prompt AI generated images We find that DALL·E is sometimes able to transfer some human activities and articles of clothing to animals and inanimate objects, such as food items. We include “pikachu” and “wielding a blue lightsaber” to explore DALL·E’s ability to incorporate popular media.
We find it interesting how DALL·E adapts human body parts onto animals. For example, when asked to draw a daikon radish blowing its nose, sipping a latte, or riding a unicycle, DALL·E often draws the kerchief, hands, and feet in plausible locations.
a professional high quality illustration of a giraffe turtle chimera. a giraffe imitating a turtle. a giraffe made of turtle.
Text Prompt AI generated images We find that DALL·E is sometimes able to combine distinct animals in plausible ways. We include “pikachu” to explore DALL·E’s ability to incorporate knowledge of popular media, and “robot” to explore its ability to generate animal cyborgs. Generally, the features of the second animal mentioned in the caption tend to be dominant.
We also find that inserting the phrase “professional high quality” before “illustration” and “emoji” sometimes improves the quality and consistency of the results.
a professional high quality emoji of a lovestruck cup of boba Text Prompt AI generated images
Zero-shot visual reasoning GPT-3 can be instructed to perform many kinds of tasks solely from a description and a cue to generate the answer supplied in its prompt, without any additional training. For example, when prompted with the phrase “here is the sentence ‘a person walking his dog in the park’ translated into French:”, GPT-3 answers “un homme qui promène son chien dans le parc.” This capability is called zero-shot reasoning.
We find that DALL·E extends this capability to the visual domain, and is able to perform several kinds of image-to-image translation tasks when prompted in the right way.
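Mechanically, "prompted in the right way" means conditioning on the caption plus the tokens of the top half of the image, then sampling the rest. In the sketch below, the helper names (encode_text, encode_image, decode_image, sample_next) are hypothetical placeholders for illustration.

```python
def image_to_image(model, encode_text, encode_image, decode_image,
                   caption, top_half):
    """Zero-shot image-to-image translation via prompting (sketch).

    The conditioning prefix is the caption tokens plus the top half of
    the image; the model then autoregressively fills in the remaining
    image tokens, which decode to the transformed picture.
    """
    tokens = list(encode_text(caption)) + list(encode_image(top_half))
    while len(tokens) < 256 + 1024:          # full text+image stream length
        tokens.append(model.sample_next(tokens))
    return decode_image(tokens[256:])        # all 1024 image tokens
```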
the exact same cat on the top as a sketch on the bottom Text Prompt Image Prompt AI generated images We find that DALL·E is able to apply several kinds of image transformations to photos of animals, with varying degrees of reliability. The most straightforward ones, such as “photo colored pink” and “photo reflected upside-down,” also tend to be the most reliable, although the photo is often not copied or reflected exactly. The transformation “animal in extreme close-up view” requires DALL·E to recognize the breed of the animal in the photo, and render it up close with the appropriate details. This works less reliably, and for several of the photos, DALL·E only generates plausible completions in one or two instances.
Other transformations, such as “animal with sunglasses” and “animal wearing a bow tie,” require placing the accessory on the correct part of the animal’s body. Those that only change the color of the animal, such as “animal colored pink,” are less reliable, but show that DALL·E is sometimes capable of segmenting the animal from the background. Finally, the transformations “a sketch of the animal” and “a cell phone case with the animal” explore the use of this capability for illustrations and product design.
the exact same teapot on the top with ’gpt’ written on it on the bottom Text Prompt Image Prompt AI generated images We find that DALL·E is able to apply several different kinds of image transformations to photos of teapots, with varying degrees of reliability. Aside from being able to modify the color of the teapot (e.g., “colored blue”) or its pattern (e.g., “with stripes”), DALL·E can also render text (e.g., “with ‘gpt’ written on it”) and map the letters onto the curved surface of the teapot in a plausible way. With much less reliability, it can also draw the teapot in a smaller size (for the “tiny” option) and in a broken state (for the “broken” option).
We did not anticipate that this capability would emerge, and made no modifications to the neural network or training procedure to encourage it. Motivated by these results, we measure DALL·E’s aptitude for analogical reasoning problems by testing it on Raven’s progressive matrices, a visual IQ test that saw widespread use in the 20th century.
a sequence of geometric shapes.
Text Prompt Image Prompt AI generated images Rather than treating the IQ test as a multiple-choice problem, as originally intended, we ask DALL·E to complete the bottom-right corner of each image using argmax sampling, and consider its completion to be correct if it is a close visual match to the original.
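A sketch of the scoring rule this implies, with the threshold and distance metric as illustrative assumptions: generate the bottom-right patch, then count it correct when it is close to the original in pixel space.

```python
import numpy as np

def is_correct_completion(generated_patch, original_patch, threshold=0.1):
    """Treat a completion as correct if it is a close visual match.

    Both patches are float arrays scaled to [0, 1]; "close" is taken
    here to mean mean absolute pixel error below a chosen threshold.
    """
    error = np.mean(np.abs(generated_patch - original_patch))
    return error < threshold
```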
DALL·E is often able to solve matrices that involve continuing simple patterns or basic geometric reasoning, such as those in sets B and C. It is sometimes able to solve matrices that involve recognizing permutations and applying boolean operations, such as those in set D. The instances in set E tend to be the most difficult, and DALL·E gets almost none of them correct.
For each of the sets, we measure DALL·E’s performance on both the original images, and the images with the colors inverted. The inversion of colors should pose no additional difficulty for a human, yet does generally impair DALL·E’s performance, suggesting its capabilities may be brittle in unexpected ways.
Geographic knowledge We find that DALL·E has learned about geographic facts, landmarks, and neighborhoods. Its knowledge of these concepts is surprisingly precise in some ways and flawed in others.
a photo of the food of china Text Prompt AI generated images We test DALL·E’s understanding of simple geographical facts, such as country flags, cuisines, and local wildlife. While DALL·E successfully answers many of these queries, such as those involving national flags, it often reflects superficial stereotypes for choices like “food” and “wildlife,” as opposed to representing the full diversity encountered in the real world.
a photo of alamo square, san francisco, from a street at night Text Prompt AI generated images We find that DALL·E is sometimes capable of rendering semblances of certain locations in San Francisco. For locations familiar to the authors, such as San Francisco, the results evoke a sense of déjà vu—eerie simulacra of streets, sidewalks and cafes that remind us of very specific locations that do not exist.
a photo of san francisco’s golden gate bridge Text Prompt Image Prompt AI generated images We can also prompt DALL·E to draw famous landmarks. In fact, we can even dictate when the photo was taken by specifying the first few rows of the sky. When the sky is dark, for example, DALL·E recognizes it is night, and turns on the lights in the buildings.
Temporal knowledge In addition to exploring DALL·E’s knowledge of concepts that vary over space, we also explore its knowledge of concepts that vary over time.
a photo of a phone from the 20s Text Prompt Image Prompt AI generated images We find that DALL·E has learned about basic stereotypical trends in design and technology over the decades. Technological artifacts appear to go through periods of explosion of change, dramatically shifting for a decade or two, then changing more incrementally, becoming refined and streamlined.
Summary of approach and prior work DALL·E is a simple decoder-only transformer that receives both the text and the image as a single stream of 1280 tokens—256 for the text and 1024 for the image—and models all of them autoregressively. The attention mask at each of its 64 self-attention layers allows each image token to attend to all text tokens. DALL·E uses the standard causal mask for the text tokens, and sparse attention for the image tokens with either a row, column, or convolutional attention pattern, depending on the layer. We provide more details about the architecture and training procedure in our paper.
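As a concrete illustration of the masking rule, here is the dense version of that 1280-token mask; because all image tokens come after the text, plain causality already lets every image token attend to all text tokens. The per-layer row, column, or convolutional sparsification of the image-to-image portion is deliberately omitted from this sketch.

```python
import numpy as np

def dense_dalle_mask(n_text=256, n_image=1024):
    """Dense causal mask over the text+image stream (True = may attend).

    The actual model additionally sparsifies attention among image
    tokens with row, column, or convolutional patterns per layer.
    """
    n = n_text + n_image
    return np.tril(np.ones((n, n), dtype=bool))

assert dense_dalle_mask().shape == (1280, 1280)
```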
Text-to-image synthesis has been an active area of research since the pioneering work of Reed et al., [^reference-1] whose approach uses a GAN conditioned on text embeddings. The embeddings are produced by an encoder pretrained using a contrastive loss, not unlike CLIP. StackGAN [^reference-3] and StackGAN++ [^reference-4] use multi-scale GANs to scale up the image resolution and improve visual fidelity. AttnGAN [^reference-5] incorporates attention between the text and image features, and proposes a contrastive text-image feature matching loss as an auxiliary objective. This is interesting to compare to our reranking with CLIP, which is done offline. Other work [^reference-2] [^reference-6] [^reference-7] incorporates additional sources of supervision during training to improve image quality. Finally, work by Nguyen et al. [^reference-8] and Cho et al. [^reference-9] explores sampling-based strategies for image generation that leverage pretrained multimodal discriminative models.
Similar to the rejection sampling used in VQVAE-2, we use CLIP to rerank the top 32 of 512 samples for each caption in all of the interactive visuals. This procedure can also be seen as a kind of language-guided search [^reference-16], and can have a dramatic impact on sample quality.
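A minimal sketch of that reranking step, assuming a hypothetical clip_score(caption, image) similarity function: draw 512 samples and keep the 32 the scorer likes best.

```python
def rerank_with_clip(caption, samples, clip_score, keep=32):
    """Keep the top-`keep` samples by CLIP text-image similarity.

    clip_score is assumed to map (caption, image) -> float, with higher
    values meaning a better match between image and caption.
    """
    ranked = sorted(samples, key=lambda img: clip_score(caption, img),
                    reverse=True)
    return ranked[:keep]
```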
an illustration of a baby daikon radish in a tutu walking a dog [caption 1, best 8 of 2048] Text Prompt AI generated images Reranking the samples from DALL·E using CLIP can dramatically improve consistency and quality of the samples.
Authors Primary Authors Aditya Ramesh Mikhail Pavlov Gabriel Goh Scott Gray Supporting Authors Mark Chen Rewon Child Vedant Misra Pamela Mishkin Gretchen Krueger Sandhini Agarwal Ilya Sutskever
" |
746 | 2,017 | "Google Street View's Window into How Americans Vote (Hint: Look at the Cars) | WIRED" | "https://www.wired.com/2017/03/google-street-views-window-americans-vote-look-cars" | "Cade Metz Business Google Street View's Window into How Americans Vote (Look at the Cars) Michael Duva/Getty Images Led by Fei-Fei Li, the director of the Stanford University artificial intelligence lab and a newly minted Google employee, a team of academics recently explored a new way of tracking socioeconomic trends across the US. Rather than knocking on doors and asking questions, they pulled more than 50 million photos from Google Street View and fed them into neural networks.
The results were promising. Simply by identifying the make, model, and year of automobiles appearing in the photos, the researchers said, their tech could accurately estimate the income, race, education, and voting patterns of citizens in particular precincts.
If the number of sedans on a short stretch of road exceeded the number of pickup trucks, for instance, they found that a city was 88 percent likely to vote for a Democrat during the next presidential election. If pickups exceeded sedans, a city was 82 percent likely to vote Republican. "Our results suggest that automated systems for monitoring demographic trends may effectively complement labor-intensive approaches, with the potential to detect trends with fine spatial resolution, in close to real time," the researchers write in a recently released paper detailing this study.
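Stated as code, the reported correlation reads as follows; this is an illustration of the finding, not the study's model, whose estimates come from fitting many vehicle attributes rather than a two-branch rule:

```python
def predict_vote(sedans, pickups):
    """Toy restatement of the reported correlation. The probabilities are
    the article's figures, attached here for illustration only."""
    if sedans > pickups:
        return ("Democrat", 0.88)
    if pickups > sedans:
        return ("Republican", 0.82)
    return ("toss-up", None)

print(predict_vote(sedans=120, pickups=45))  # ('Democrat', 0.88)
```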
Fei-Fei and her colleagues declined to discuss their project because the paper is still under peer review. But their work reflects a much larger effort to gain more insight into broad societal and economic trends through new sources of data, crowdsourcing, and machine learning. In the years to come, machines---not statisticians---will paint the more accurate picture of how humans think, live, and spend.
At a San Francisco startup called Premise , machines parse data collected by an army of people spread across the world, building real-time consumer price indexes. A Palo Alto startup, Orbital Insight , uses artificial intelligence to analyze photos taken by satellites, identifying economic trends from what it finds. And various other researchers have predicted unemployment rates and poverty using everything from Twitter to cellphone metadata.
Fei-Fei and her collaborators see their methods as a replacement for the American Community Survey , a $250 million-a-year study conducted by the US Census Bureau that identifies a vast array of American demographic trends. Online data and machine learning, the researchers say, will reduce the cost of door-to-door demographic studies like these while providing greater accuracy. Door-to-door surveys, after all, don't operate in real time. They're out of date before they're finished.
The methods outlined in Fei-Fei's study still require some on-the-ground data gathering to establish a baseline from which AI-powered techniques can extrapolate. But most of the process is automated. Well-trained neural networks can recognize the make, model, and year of cars in photos with much greater efficiency than humans. As described in the paper, the system needs only one-fifth of a second to sort a vehicle into any of 2,657 categories.
But if Street View photos offer one kind of insight, the view from space offers another path to automated forecasting. Orbital Insight now tracks 250,000 parking lots outside 96 retail chains across the country and uses the number of cars in lots as an indicator of company health. This quarter, for instance, the number of cars in JCPenney lots fell 10 percent.
Not surprisingly, perhaps, the retailer just announced the closure of about 130 stores amid declining sales. Premise, meanwhile, pays a network of people across the developing world to collect economic data on the ground---the price of canned coffee in a particular town, for instance, or the freshness of the lettuce on sale in another. Using machine learning techniques similar to those used to analyze Street View and satellite images, the company can then look for price patterns.
Apply these methods across multiple retailers and multiple industries, and you have what starts to look like an unprecedented collection of economic indicators. Machines can detect patterns that humans can't, or at least with much greater speed and accuracy. As they get smarter, the promise is that these automated forecasts will provide a foundation not just for better economic planning but a better democracy. In a political climate beset by deniers of facts, the hope remains that better information will yield better decisions by the people with the power to make them.
" |
747 | 2,012 | "An Intentional Mistake: The Anatomy of Google's Wi-Fi Sniffing Debacle | WIRED" | "https://www.wired.com/2012/05/google-wifi-fcc-investigation" | "By David Kravets. Google's public version of events of how it came to secretly intercept Americans' data sent on unencrypted Wi-Fi routers over a two-year period doesn't quite mesh with what the search giant told federal regulators.
And if Google had its way, the public would have never learned the software on Google's Street View mapping cars was "intended" to collect payload data from open Wi-Fi networks.
A Federal Communications Commission document disclosed Saturday showed for the first time that the software in Google's Street View mapping cars was "intended" to collect Wi-Fi payload data, and that engineers had even transferred the data to an Oregon storage facility. Google tried to keep that and other damning aspects of the Street View debacle from public review, the FCC said.
Google accompanied its responses to the FCC inquiry "with a very broad request for confidential treatment of the information it submitted," the FCC said, in a letter to Google , saying it would remove most of the redaction from the FCC's public report and other documents surrounding the debacle.
The FCC document unveiled Saturday is an unredacted version of an FCC finding, which was published last month with dozens of lines blacked out. The report said that Google could not be held liable for wiretapping, despite a federal judge holding otherwise.
The unredacted FCC report refers to a Google "design document" written by an engineer who crafted the Street View software to collect so-called payload data, which includes telephone numbers, URLs, passwords, e-mail, text messages, medical records, video and audio files sent over open Wi-Fi networks.
The engineer is referred to as "Engineer Doe" in the report, though he was identified on Sunday as Marius Milner, a well-known figure in the Wi-Fi hacking community. The document says the software Milner used collected 200 gigabytes of data via Street View cars between 2008 and 2010: The design document showed that, in addition to collecting data that Google could use to map the location of wireless access points, Engineer Doe intended to collect, store, and analyze payload data from unencrypted Wi-Fi networks. The design document notes that '[w]ardriving can be used in a number of ways,' including 'to observe typical Wi-Fi usage snapshots.' In a discussion of 'Privacy Considerations,' the design document states, 'A typical concern might be that we are logging user traffic along with sufficient data to precisely triangulate their position at a given time, along with information about what they were doing.' That statement plainly refers to the collection of payload data because MAC addresses, SSIDs, signal-strength measurements, and other information used to map the location of wireless access points would reveal nothing about what end users 'were doing.' Engineer Doe evidently intended to capture the content of Wi-Fi communications transmitted when Street View cars were in the vicinity, such as e-mail and text messages sent to or from wireless access points. Engineer Doe identified privacy as an issue but concluded that it was not a significant concern because the Street View cars would not be 'in proximity to any given user for an extended period of time,' and '[n]one of the data gathered ... [would] be presented to end users of [Google's] services in raw form.' Nevertheless, the design document listed as a 'to do' item, '[D]iscuss privacy considerations with Product Counsel.' That never occurred. The design document also states that the Wi-Fi data Google gathered would 'be analyzed offline for use in other initiatives,' and that '[a]nalysis of the gathered data [was] a non-goal (though it [would] happen).' The majority of those words were originally blacked out at Google's request, but the commission subsequently concluded, after the report was filed, that much of it should be made publicly available because "Disclosure of this information may cause commercial embarrassment, but that is not a basis for requesting confidential treatment." Rewind to May 2010, when Google announced the Street View debacle: So how did this happen? Quite simply, it was a mistake. In 2006 an engineer working on an experimental Wi-Fi project wrote a piece of code that sampled all categories of publicly broadcast WiFi data. A year later, when our mobile team started a project to collect basic WiFi network data like SSID information and MAC addresses using Google's Street View cars, they included that code in their software—although the project leaders did not want, and had no intention of using, payload data.
While those sentences are technically true, one would have no idea from reading it that the payload-slurping software was intentionally included and that project leaders had been informed, in detail, about the software. (Google's unnamed project manager claims not to have read Milner's design document.) In fact, an editorial from the Electronic Frontier Foundation in 2010 shows that even experts read Google's blog post to mean that the sensitive data was collected via an honest mistake by code-reusing engineers, rather than via an engineering team's intentional choice that was totally missed by management tasked with overseeing them, as the FCC report makes clear.
"[T]he company admitted that its audit of the software deployed in the Street View cars revealed that the devices actually had been inadvertently collecting content transmitted over non-password protected Wi-Fi networks.... Penalties for wiretapping electronic communications in the federal Electronic Communications Privacy Act (ECPA) only apply to intentional acts of interception, yet Google claims it collected the content by accident," wrote then-EFF attorney Jennifer Granick.
Google also demanded that the FCC black out passages revealing that several engineers had access to the Street View code, and that payload data was reviewed by engineers on at least two occasions. The unredacted FCC report also showed that Google's supervision of the Street View project was "minimal." "In October 2006, Engineer Doe shared the software code and a 'design document' explaining his plans with other members of the Street View project. The design document identified "Privacy Considerations" and recommended review by counsel, but that never occurred. Indeed, it appears that no one at the company carefully reviewed the substance of Engineer Doe's software code or the design document," the unredacted document said.
Google management said publicly it did not realize it was sniffing packets of data on unsecured Wi-Fi networks in about a dozen countries until German privacy authorities began questioning what data Google's Street View mapping cars were collecting. Google, along with other companies, use databases of Wi-Fi networks and their locations to augment or replace GPS when attempting to figure out the location of a computer or mobile device.
Google initially stored "all Wi-Fi data in machine-readable format" on hard disks on each Street View car, but "the Company ultimately transferred the data to servers at a Google data center in Oregon," the unredacted report revealed.
The FCC originally released a heavily redacted version of its investigation into the Street View debacle last month, fining the company $25,000 for stonewalling the investigation.
But the report had black bars over the key findings. The FCC followed procedures that allow companies to withhold business-related confidential information from the public. So, at Google's request, it initially redacted its report, known as a "notice of apparent liability," according to an e-mail from Tammy Sun, an FCC spokeswoman.
However, the FCC did not agree with Google's "broad requests for confidential treatment" and was moving to uncensor its report, which required giving Google an opportunity to protest the decision.
So Google decided to preempt the FCC.
On Saturday, a dumping ground day for news, Google forwarded a virtually unredacted version of the report to the Los Angeles Times.
The FCC posted its mostly unredacted version of the document on its website three days later.
Google declined to be interviewed for this story. Instead, it released a canned statement attributable to "a Google spokesperson": "We decided to voluntarily make the entire document available except for the names of individuals. While we disagree with some of the statements made in the document, we agree with the FCC's conclusion that we did not break the law. We hope that we can now put this matter behind us." Both the redacted and unredacted FCC reports concluded that, between 2008 and 2010, "Google's Street View cars collected names, addresses, telephone numbers, URLs, passwords, e-mail, text messages, medical records, video and audio files, and other information from internet users in the United States." But, the commission said, Google did not engage in illegal wiretapping because the data was flowing, unencrypted, over open radio waves.
The commission found that legal precedent -- and engineer Milner's invocation of the Fifth Amendment -- meant Google was off the hook for wiretapping. The FCC agreed with Google that its actions did not amount to wiretapping because the unencrypted Wi-Fi signals were "readily accessible to the general public." According to the Wiretap Act, amended in 1986, it's not considered wiretapping "to intercept or access an electronic communication made through an electronic communication system that is configured so that such electronic communication is readily accessible to the general public." But U.S. District Judge James Ware, a California federal judge presiding over about a dozen lawsuits accusing Google of wiretapping Americans, ruled last year that Google could be held liable for wiretapping damages.
Judge Ware said that the FCC interpretation did not apply to open, unencrypted Wi-Fi networks and instead applied only to "traditional radio services" like police scanners. The lawsuits have been stayed, pending the outcome of Google's appeal.
" |
748 | 2,017 | "AlphaGo Beats Top Go Grandmaster Ke Jie in First Match | WIRED" | "https://www.wired.com/2017/05/revamped-alphago-wins-first-game-chinese-go-grandmaster" | "By Cade Metz. [Photo: DeepMind founder Demis Hassabis at the Future of Go summit in Wuzhen.] WUZHEN, CHINA — In the first game of his match with AlphaGo—the Go-playing machine built by researchers at Google's DeepMind lab—Chinese grandmaster Ke Jie opened with a move straight from the playbook of his artificially intelligent opponent. He aimed to beat AlphaGo with its own unusual style of play. But the gambit didn't work. After four hours and fifteen minutes of play, the 19-year-old grandmaster resigned, and AlphaGo grabbed a 1–0 lead in this best-of-three match.
Last year, in South Korea, AlphaGo topped the Korean grandmaster Lee Sedol, becoming the first machine to beat a professional Go player—a feat that most AI researchers believed was still years away, given the extreme complexity of the ancient Eastern game. Now, here in Wuzhen, China, AlphaGo is challenging Ke Jie, the current world number one.
According to Demis Hassabis, the CEO and founder of DeepMind, this time out the machine is driven by a new and more powerful architecture. It can now learn the game almost entirely from play against itself, relying less on data generated by humans. In theory, this means DeepMind's technology can more easily learn any task.
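DeepMind had not published the details of the new architecture at the time, but the self-play idea can be sketched in the abstract. Everything named below is a hypothetical hook:

```python
import random

def self_play_update(policy, play_game, train_step, n_games=128):
    """Skeleton of a self-play improvement loop: the current policy plays
    itself and the winner's (state, move) pairs become training targets.
    `policy`, `play_game`, and `train_step` are hypothetical hooks; the
    real pipeline involves value networks and search-guided targets."""
    batch = []
    for _ in range(n_games):
        moves_by_player, winner = play_game(policy, policy)
        batch.extend(moves_by_player[winner])   # learn from the winner only
    random.shuffle(batch)
    return train_step(policy, batch)
```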
In January, under the pseudonym "Master," the AlphaGo's new incarnation played several of the world's top players in a series of online matches, including Ke Jie, and it won all 60 of its completed contests.
Today's face-off against Ke Jie continues that streak. As the match began, Ke Jie chose to play black, meaning he would make the first move, and he opened with what's called a "3–3 point" strategy—a rather unusual opening that AlphaGo played regularly during the Master series in January. "He has changed since the Master games six months ago," match commentator Michael Redmond said of Ke Jie. "He is using a lot of Master's moves." Indeed, since the Master series, Ke Jie has regularly used this kind of opening during matches with other grandmasters. "The influence of Alpha has been widespread," Ke Jie said during the post-game press conference, through an interpreter. For Hassabis, Ke Jie's adjustments provide further evidence that AlphaGo has changed the way grandmasters play the ancient game—and an indication of how artificial intelligence can augment what humans do, not just eclipse them.
Still, AlphaGo responded well to Ke Jie's opening. It took hold of the match much sooner than even the DeepMind team expected. Just three and a half hours into the game—which was slated for six or more—AlphaGo dominated so much of the board that match commentators gave Ke Jie little chance of clawing his way back into the match. Less than an hour later, he resigned.
"What's exciting is that AlphaGo just keeps getting better," said commentator Hajin Lee. "It was already so good before." Given AlphaGo's strong showing during the "Master series," few expect Ke Jie to win this week's match. But the contest provides an opportunity to gauge the continued progress of AlphaGo and, indeed, AI in general. Underpinned by machine learning techniques that are already reinventing everything from internet services to health care to robotics , AlphaGo serves as a proxy for the future of artificial intelligence.
Hassabis underscored this notion as the first game began, revealing that AlphaGo's new architecture was better-suited to tasks outside the world of games. Among other things, he said, the system could help accelerate the progress of scientific research, and significantly improve the efficiency of national power grids.
For Google, the match doubles as an enormous PR opportunity, as the company angles to offer its online services in China. Though millions of phones in the country run Google's Android operating system, local government restrictions prevent the tech giant from offering official access to online services such as Gmail and its core search product. But Google has said it hopes to offer its services here in the future. As reporters arrived to cover the match, they received, among other things, a flyer describing Google's Translate app—in both English and Chinese.
Google Translate is now driven by deep neural networks, a breed of machine learning that also feeds AlphaGo.
If AlphaGo's showing so far is any indication, the revamped architecture really has paid off. During the first game, the upgrade was apparent to Ke Jie. "AlphaGo is a completely different player," he said after the game. "It is like a god of a Go player."
" |
749 | 2,015 | "Harnessing AI to Make Your Boring Bank Statements Useful | WIRED" | "https://www.wired.com/2015/04/kasisto-and-moneystream" | "By Davey Alba. Old-school financial institutions are typically slow-moving giants. And that's a shame, because banks also tend to accumulate deep troves of data on their customers that goes mostly untapped. If you're a consumer looking for an answer to a specific question about your finances, tough luck. Your usual recourse would probably involve a lot of digging through bank statements and bank website pages, or endless hours on the phone with a customer service rep.
But as of late, a swell of banking startups are seeking to change this. They take all that undifferentiated data tucked into your bank statements, and then, harnessing artificial intelligence, transform and organize it into helpful information that people can actually understand---and act on.
Among these is Kasisto, a spin-off venture of SRI International---the creator of Siri.
The startup is testing a voice-recognition add-on for mobile banking apps that lets customers ask questions about their accounts. Users can ask Kasisto, “How much have I spent on fees?” or tell it, “I’m looking for a three-dollar transaction on my checking account,” and the system will return an answer. It’s a voice-activated assistant that, unlike Siri, isn’t a generalist.
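Kasisto has not published its language pipeline, but the flow it describes (utterance in, intent and slots out) can be sketched with a toy pattern matcher. The intent names and patterns here are hypothetical:

```python
import re

# Hypothetical intents for a banking assistant; Kasisto's actual
# natural-language pipeline is proprietary and far more capable.
INTENTS = [
    ("sum_fees", re.compile(r"how much .* spent on fees", re.I)),
    ("find_transaction",
     re.compile(r"looking for a (\S+)[- ]dollar transaction on my (\w+) account", re.I)),
]

def parse(utterance):
    """Map an utterance to an (intent, slots) pair, falling back when
    nothing matches -- the shape of the problem, not Kasisto's solution."""
    for name, pattern in INTENTS:
        match = pattern.search(utterance)
        if match:
            return {"intent": name, "slots": match.groups()}
    return {"intent": "fallback", "slots": ()}

print(parse("How much have I spent on fees?"))
print(parse("I'm looking for a three-dollar transaction on my checking account"))
```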
Meanwhile, MoneyStream, a new service from a Silicon Valley startup of the same name, links your bank account to a range of services that together deliver personal finance predictions in the form of a simple calendar. In other words, you can see how much money you can expect in your bank account month to month. While other apps, notably Level, already aim to help you plan your daily budget, MoneyStream claims it can help you dig deeper into the more complex fluctuations that can upend the best-intentioned plans.
In developing these tools, Kasisto and MoneyStream join a multitude of other companies that are riding the cresting wave of artificial intelligence to surface information that's useful for humans. Companies from Google to Facebook to Baidu are using AI to drive voice and image recognition. Smaller startups are using AI to rethink stodgy practices from job hunting to finding a restaurant that perfectly suits your budget and dietary needs.
Now, both Kasisto and MoneyStream aim to inject personal money management with the same AI touch.
Kasisto has deep knowledge of two things: the semantics of the financial services world and your own relationship with your bank. It then uses natural language processing and machine learning to serve up the information you're looking for.
The company, which launched last June, has been in “friends and family” testing mode in the US and Asia, but its roster of clients already includes Spain-based banking group BBVA and Wells Fargo, among others. Kasisto CEO and co-founder Zor Gorelov explains his company has basically built two types of apps: an enterprise add-on that corporate clients—especially company CFOs and treasurers—will use internally, and a consumer-friendly app that will integrate into existing mobile banking apps. The team plans to roll the app out to consumers later this year.
Gorelov says Kasisto's tech is primed to integrate with existing smartphone hardware to extract relevant info anytime and anywhere it's needed. Ask the app, for instance, "How much have I spent in this store?" and the system will use location services to determine which store you're shopping from and whether the store is offering any deals. Along with GPS, Kasisto can also look at consumption history and transactions between users, as well as connect to credit-card-linked offers to show you bargains that, based on your spending history, might be interesting to you.
According to the app's founders, remaking the financial services market is just the beginning. Eventually, they claim, clients will be able to train Kasisto to work with other industry verticals, such as healthcare, retail, and more. Even then, however, banks say Kasisto won't replace human customer service. "We see the app as complementary," says Steve Ellis, executive vice president at Wells Fargo, one of Kasisto's clients, pointing out that banks don't plan to close their branches just because online banking has become more popular.
Mike Bertrand remembers a time he would have been in hot water if not for MoneyStream. He had just used his credit card to pay for a big vacation and hadn't realized a telephone bill was due the next day. "I got this alert that my credit card was about to go over its limit and I was going to get hit with a fee," the MoneyStream CEO says. "It was because we had tied in the two data sources of AT&T and my credit card that I could do something about it. I moved some money around, and it wasn't an issue." That's the added value MoneyStream gives its customers, according to Bertrand's pitch. Simply link your bank account to the service (the web app is the most robust landing page for now), and it identifies your sources of income, your recurring bills, your credit cards, and your loans. After scooping in all the data it can glean from your bank statement, you can add new bills, utilities, accounts and credit cards, and also manage your email notifications and alerts. An algorithm analyzes this "stream," and, using AI, projects how much money you'll have in the bank for months to come, showing you the information in a straightforward calendar format. According to Bertrand, the system gets better with use as the stream gathers more historical data and users correct individual entries.
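A minimal sketch of that calendar-style projection, assuming the recurring items have already been detected from transaction history:

```python
from datetime import date, timedelta

def project_balance(start_balance, recurring, days=60, today=None):
    """Roll a balance forward day by day using recurring items detected
    from history. `recurring` is a list of (day_of_month, amount) pairs,
    positive for income and negative for bills -- a toy stand-in for
    MoneyStream's "stream"; a real system must also handle variable
    amounts, shifting due dates, and one-off transactions."""
    today = today or date.today()
    balance, calendar = start_balance, []
    for offset in range(1, days + 1):
        day = today + timedelta(days=offset)
        for due_day, amount in recurring:
            if day.day == due_day:
                balance += amount
        calendar.append((day, balance))
    return calendar

stream = [(1, 2400.0), (3, -1200.0), (15, -80.0)]  # paycheck, rent, phone
for day, balance in project_balance(1500.0, stream, days=35)[6::7]:
    print(day.isoformat(), round(balance, 2))     # one reading per week
```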
Bertrand acknowledges there are certain limitations to the way MoneyStream is currently set up. The algorithm does well for folks who see regular income, and not so great for people with, say, commission-based jobs who have high variability in their income. But, Bertrand claims, after starting off with the simplistic calculations for the limited amount of data it's able to examine, MoneyStream improves every day. Eventually, he says, the algorithm will be able to discern more of a user's spending patterns and come out with better predictions. It may even be able to classify you (anonymously) in a group of "people like you"—users with similar spending behavior—to provide more insights.
I tried MoneyStream myself, and the app cleverly picked up on several of my other accounts after linking just one bank account. It grabbed three credit cards, a savings and a checking account, my income stream, and even my student loan. After playing around with it a bit more, I was able to add another bank account and my PayPal account, which gave me a pretty good picture of my finances. But it couldn't make sense of my rent in the stream, which I usually pay my roommate in a lump sum along with utilities, using either a check or the money-transfer app Venmo.
Bertrand says that as time goes on, the system will learn and be able to do more. He says that its predictive abilities go beyond the historical data that has been the norm for financial planning apps for decades. "The exercise we’ve done here is, you know, the bank is where you've stored your money," he says. But where your money works for you is out in the world, he says. And it's understanding that motion that's valuable---not just the list of debits and credits your bank sends you every month.
" |
750 | 2,018 | "Startups Race to Create Cancer Screens from DNA | WIRED" | "https://www.wired.com/story/startups-race-to-create-cancer-screens-from-dna" | "By Megan Molteni. Silicon Valley is out for blood—and not just the rejuvenating blood of the young.
Biomedical engineers are enthralled by the promise of liquid biopsies, noninvasive tests that detect and classify cancers by identifying the tiny bits of DNA that tumors shed into the bloodstream. Studies at leading cancer centers have already shown the technology's effectiveness in personalizing treatments after diagnosis.
Now startups are selling VCs a vision of cheap, surgery-free cancer screening even before symptoms appear.
Andreessen Horowitz, Google Ventures, Verily, and others have invested $77 million in Freenome, which uses machine learning to pinpoint immune-system responses that may indicate the presence of cancer. Freenome's most prominent rival, Grail—which plans to harness next-generation gene sequencing to directly measure cancerous genomic alterations in the blood—raised $1.2 billion last year, led by ARCH Venture Partners.[1] Both companies are racing to make the first DNA-detecting blood test to reveal disease in its earliest stages. It's the holy—well, you know—of cancer care.
If this scientific sprint is giving you Theranos flashbacks, it should. Critics believe that even with the aid of low-cost genetic sequencing and high-powered algorithms, liquid biopsy detection is still years away from being patient-ready. The startups have shared scant data so far. (Grail has begun enrolling 130,000 patients in two huge trials, but it won't have results for a few years.) Having secured massive infusions of funding, it's not money holding these blood unicorns back, it's basic biology.
[1] Correction appended, 1/24/18, 3:30 PM EDT: A previous version of this story misstated the Grail funding round as being from ARCH Venture Partners. It was led by the firm.
" |
751 | 2,016 | "A Radically Simple Idea Will Let Us Catch Cancer Before It's Cancer | WIRED" | "https://www.wired.com/2016/12/a-radically-simple-idea-will-let-us-catch-cancer-before-its-cancer" | "By K McGowan, Backchannel. In 2017, cancer might finally become number one. It would be an upset victory, given that heart disease has killed more people in the US every year for nearly a century. But it wouldn't exactly be a surprise: As deaths from heart disease and stroke have dwindled, cancer has held strong, steadily narrowing the gap. If it doesn't win out this year, it will soon. But there's a new realization about why we've been losing the war on cancer for so long: Our battle plans are wrong. Typically, we wait until a tumor is big enough to feel or see before attacking it. By waiting until then, we often face a foe that has evolved over many years into a trickster beast, riddled with bizarre mutations that allow it to quickly thwart any drug we throw at it.
So many researchers are adopting a new approach: Deterrence rather than war. The idea is to shut down tumors before they get nasty, when the cells are still premalignancies — already funky and wrong and predisposed to cancer, but relatively docile and simple to deal with.
Whichever buzzword it goes by—interception, active prevention, early intervention—the concept is the same: "Can you intervene early enough so that you're changing the natural history, changing the course of something that could potentially develop into cancer?" says medical oncologist Matthew Yurgelun of the Dana-Farber Cancer Institute.
In some ways, this idea is an offshoot of traditional prevention, which is vastly underrated. Right now, if everyone scrupulously followed the advice to avoid smoking, stay active, keep out of the sun, and stay lean, we’d cut the rate of cancer deaths by half. Interception is basically a logical next step: prevention with a take-charge, can-do attitude. As Paul Limburg, a professor of medicine at Mayo Clinic and principal investigator of the Cancer Prevention Network, puts it, “We’re at a time that is better positioned than ever in my 30-year career to do something new and novel in the field of cancer prevention.” Two big advances are behind this shift. One is the new view of the deep and complex relationships between the immune system and cancer. It took researchers a while to realize it, but the immune cells they see invading some tumors can actually hold back the malignancy, until the cancer eventually evolves away beyond it.
More recently, the success of drugs like Keytruda and Yervoy has shown that a liberated immune response can sometimes wipe out even late-stage cancers. The drugs only work on certain cancers and in a minority of people, but they're proof of how powerful a properly stimulated immune system can be. Scientists are now eager to see what it might be able to do earlier in the disease's progression.
"If you can intervene earlier, and ramp up the immune response early on in the process or prevent cells from evading the immune system, that could be your point of interception," says Yurgelun.
One promising way to do so might be through cancer vaccines. Despite a history of flops, a recent research report identified 1,200 cancer vaccine projects in development, including both preventive ones and vaccines that act as therapies. Limburg’s Cancer Prevention Network, for example, is testing a vaccine aimed at people with precancerous growths in the colon. Nora Disis’s group at the University of Washington is just finishing up the first stage of a vaccine it hopes to use against breast cancer for women at high risk.
This new breed of cancer vaccines should be more effective than its failed precursors. Applying mathematical analysis to genome sequence data, researchers hope to predict which weird changes to tumor cells may be good targets for a vaccine, and bundle many of them together into a potent mix that can fire up the immune system. The quest for the best of these “neoantigens” is underway in many groups; a project from Sean Parker’s new cancer immunotherapy institute, announced in early December, is a collaboration/competition between some 30 research groups and companies to look for these targets in established cancers. Their first rough draft list is expected in the spring.
The long-term vision might be: A person at high risk because of genetics, or who has a premalignancy (maybe a former smoker with lesions of the lung picked up on CT) gets their tissue analyzed for suitable targets — mutations, or other misbehaving proteins. They'd get a vaccine, possibly aimed at their tumor type, or maybe custom built to hit those soft spots. They'd also get immunotherapy drugs that goose certain parts of the immune system and hold others back. If it worked, maybe they'd never really get cancer at all.
The second big insight behind interception is the idea of cancer as change over time, and a far more specific knowledge of how an easygoing precancer transforms into a menace. As Limburg puts it, “the disease is carcinogenesis, not cancer.” Once you know what the steps are, you can figure out how to stop them.
Good old aspirin, for example, is quite effective in preventing colon cancer if you take it long enough, reducing the risk by 30 percent after 10 years of daily pills. But for a long time nobody knew why, and because it usually isn't clear who would benefit and who wouldn't, it wasn't widely recommended. Now, a better understanding of the precise effects it has — and a way to test which cancers it will interrupt — makes it possible to do "precision prevention," predicting who should take aspirin and who shouldn't. On top of that, two major studies of aspirin that test for the first time whether it lowers the overall death rate are expected to publish in 2017.
Drugs now used to treat established cancer might also keep precancers from progressing. The breast cancer drug tamoxifen, for example, prevents about one half of cancers in moderate-risk women. Other studies are now evaluating whether newer “targeted therapies” designed for specific mutations, or even immunotherapy drugs, might halt progression early on. “Moving these agents earlier in the process will probably portend greater benefit — the machinery is more there and able to respond,” says Ernest Hawk, vice president for cancer prevention at MD Anderson. In this vision, prevention and treatment are a continuum.
This year, these buzzwords will apply more to a shift in thinking than a shift in practice. Changes in medicine happen slowly, and prevention in particular takes a painfully long time to prove. Plus, though we know a lot about cancer, we have much to learn about the mysterious lives of precancers. Which funny-looking lumps will turn into dangerous tumors, and which won’t? Right now, it’s hard to tell.
The emerging technology of liquid biopsy, which analyzes the blood for tiny scraps of tumor DNA, may eventually make it easier to spy on what these cells are doing. It’s now used for established cancers, but many liquid biopsy developers hope to retool the tests toward early detection. But it will take years to prove that it really works.
Cancer researchers aim to copy what the cardiologists did, transforming a disease from a life-threatening crisis that requires dramatic emergency treatment into a problem you prevent. “Cancer is slower, and it’s more complex, but it’s going to follow the same process,” says Hawk. “The emphasis will ultimately be on prevention and treatment of those cases that sneak through, rather than now, where we have 90 percent emphasis on treatment and 5 percent on prevention.” It will require a granular understanding of cancer biology, new drugs and tests, and a massive campaign to change people’s minds about the nature of cancer. And in 2017, the campaign is finally beginning.
" |
752 | 2,012 | "Google's Artificial Brain Learns to Find Cat Videos | WIRED" | "https://www.wired.com/wiredscience/2012/06/google-x-neural-network" | "By Liat Clark, Wired UK. When computer scientists at Google's mysterious X lab built a neural network of 16,000 computer processors with one billion connections and let it browse YouTube, it did what many web users might do -- it began to look for cats.
The "brain" simulation was exposed to 10 million randomly selected YouTube video thumbnails over the course of three days and, after being presented with a list of 20,000 different items, it began to recognize pictures of cats using a "deep learning" algorithm. This was despite being fed no information on distinguishing features that might help identify one.
Picking up on the most commonly occurring images featured on YouTube, the system achieved 81.7 percent accuracy in detecting human faces, 76.7 percent accuracy when identifying human body parts and 74.8 percent accuracy when identifying cats.
"Contrary to what appears to be a widely-held intuition, our experimental results reveal that it is possible to train a face detector without having to label images as containing a face or not," the team says in its paper, Building high-level features using large scale unsupervised learning , which it will present at the International Conference on Machine Learning in Edinburgh, 26 June-1 July.
"The network is sensitive to high-level concepts such as cat faces and human bodies. Starting with these learned features, we trained it to obtain 15.8 percent accuracy in recognizing 20,000 object categories, a leap of 70 percent relative improvement over the previous state-of-the-art [networks]." The findings -- which could be useful in the development of speech and image recognition software, including translation services -- are remarkably similar to the "grandmother cell" theory that says certain human neurons are programmed to identify objects considered significant. The "grandmother" neuron is a hypothetical neuron that activates every time it experiences a significant sound or sight. The concept would explain how we learn to discriminate between and identify objects and words. It is the process of learning through repetition.
"We never told it during the training, 'This is a cat,'" Jeff Dean, the Google fellow who led the study, told the New York Times.
"It basically invented the concept of a cat." "The idea is that instead of having teams of researchers trying to find out how to find edges, you instead throw a ton of data at the algorithm and you let the data speak and have the software automatically learn from the data," added Andrew Ng, a computer scientist at Stanford University involved in the project. Ng has been developing algorithms for learning audio and visual data for several years at Stanford.
Since coming out to the public in 2011, the secretive Google X lab -- thought to be located in the California Bay Area -- has released research on the Internet of Things, a space elevator and autonomous driving.
Its latest venture, though not nearing the number of neurons in the human brain (thought to be over 80 billion), is one of the world's most advanced brain simulators. In 2009, IBM developed a brain simulator that replicated one billion human brain neurons connected by ten trillion synapses.
However, Google's latest offering appears to be the first to identify objects without hints and additional information. The network continued to correctly identify these objects even when they were distorted or placed on backgrounds designed to disorientate.
"So far, most [previous] algorithms have only succeeded in learning low-level features such as 'edge' or 'blob' detectors," says the paper.
Ng remains skeptical and says he does not believe they have yet hit on the perfect algorithm.
Nevertheless, Google considers it such an advance that the research has made the giant leap from the X lab to its main labs.
" |
753 | 2,013 | "How Ray Kurzweil Will Help Google Make the Ultimate AI Brain | WIRED" | "https://www.wired.com/business/2013/04/kurzweil-google-ai" | "Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Steven Levy | Business | How Ray Kurzweil Will Help Google Make the Ultimate AI Brain
Image: null0/Flickr

Google has always been an artificial intelligence company, so it really shouldn’t have been a surprise that Ray Kurzweil, one of the leading scientists in the field, joined the search giant late last year. Nonetheless, the hiring raised some eyebrows, since Kurzweil is perhaps the most prominent proselytizer of “hard AI,” which argues that it is possible to create consciousness in an artificial being. Add to this Google’s revelation that it is using techniques of deep learning to produce an artificial brain, and a subsequent hiring of the godfather of computer neural nets Geoffrey Hinton, and it would seem that Google is becoming the most daring developer of AI, a fact that some may consider thrilling and others deeply unsettling. Or both.
On Tuesday, Kurzweil moderated a live Google hangout tied to a release of the upcoming Will Smith film, After Earth , presumably tying the film’s futuristic concept to actual futurists. The discussion touched on the necessity of space travel and the imminent resolution of the world’s energy problems with solar power. After the hangout, Kurzweil got on the phone with me to explore a few issues in more detail.
WIRED: In the Google hangout you just finished, Will Smith said he had a copy of your book by his bedside because he’s been involved in a number of science fiction movies. How do you view science fiction?

RAY KURZWEIL: Science fiction is the great opportunity to speculate on what could happen. It does give me, as a futurist, scenarios. It’s not incumbent upon science fiction creators to be realistic about time frames and so on. In this movie, for example, the characters come back to Earth a thousand years later and biological evolution has moved so far that the animals are quite different. That’s not realistic. Also, there’s very often a dystopian bent to science fiction because we can perceive the dangers of science more than the benefits, and maybe that makes more dramatic storytelling. A lot of movies about artificial intelligence envision that AIs will be very intelligent but missing some key emotional qualities of humans and therefore turn out to be very dangerous.
What’s the key to predicting the future? I realized 30 years ago that the key to being successful is timing. I get a lot of new technology proposals, and I’d say 95% of those teams will build exactly what they claim if given the resources, but 95% of those projects will fail because the timing is wrong.

I did anticipate, for instance, that search engines would start emerging. Fifteen years ago Larry Page and Sergey Brin were in exactly the right place at the right time with the right idea.

You anticipated search engines? Yes. I wrote about that actually as early as The Age of Intelligent Machines, in the 1980s. [The book was published in 1990.]

But did you predict that you would be working for a company that started as a search engine? That’s exactly the kind of thing you can’t predict. It would be very hard to predict that these couple of kids at Stanford would take over the world of search. But what I did discover is that if you examine the key measures of price performance and capacity of information technology, they form amazingly predictable smooth exponential curves. The price performance of computation has been rising in a very smooth exponential since the 1890 census. This has gone on through thick and thin, through war and peace, and nothing has affected it. I projected it out to 2050. In 2013, we’re exactly where we should be on that curve.
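Kurzweil's claim is about a log-linear trend. A few lines of NumPy show the kind of extrapolation he describes; the data points below are invented placeholders for this sketch, not his actual price-performance survey figures:

```python
import numpy as np

# Hypothetical Kurzweil-style extrapolation: fit a straight line to
# log10(price-performance) versus year, then project it forward.
years = np.array([1950, 1960, 1970, 1980, 1990, 2000, 2010])
perf = np.array([1e-2, 1e-1, 1e1, 1e3, 1e5, 1e8, 1e11])  # placeholder units

slope, intercept = np.polyfit(years, np.log10(perf), 1)
doubling_time = np.log10(2) / slope  # years per doubling on the fitted curve

projected_2050 = 10 ** (slope * 2050 + intercept)
print(f"~{doubling_time:.1f} years per doubling; 2050 projection: {projected_2050:.2e}")
```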
What are you working on at Google? My mission at Google is to develop natural language understanding with a team and in collaboration with other researchers at Google. Search has moved beyond just finding keywords, but it still doesn’t read all these billions of web pages and book pages for semantic content. If you write a blog post, you’ve got something to say, you’re not just creating words and synonyms. We’d like the computers to actually pick up on that semantic meaning. If that happens, and I believe that it’s feasible, people could ask more complex questions.
Are you participating in Jeff Dean’s program there to build an artificial " Google Brain ?" Well, Jeff Dean is one of my collaborators. He’s a fellow research leader. We are going be using his systems and his techniques of deep learning.
The reason I’m at Google is resources like that. Also the knowledge graph and very advanced syntactic parsing and a lot of advanced technologies that I really need for a project that really seeks to understand natural language. I can succeed at this much more readily at Google because of these technologies.
If your system really understood complex natural language, would you argue that it’s conscious? Well, I do. I’ve had a consistent date of 2029 for that vision. And that doesn’t just mean logical intelligence. It means emotional intelligence, being funny, getting the joke, being sexy, being loving, understanding human emotion. That’s actually the most complex thing we do. That is what separates computers and humans today. I believe that gap will close by 2029.
Will we get there simply by more computation and better software, or are there currently unsolved barriers that we have to hurdle? There are both hardware and software requirements. I believe we actually are very close to having the requisite software techniques. Partly this is being assisted by understanding how the human brain works, and we’re making exponential gains there. We can now see inside a living brain and see individual inter-neural connections being formed and firing in real time. We can see your brain create your thoughts and thoughts create your brain. A lot of this research reveals how the mechanism of the neocortex works, which is where we do our thinking. This provides biologically inspired methods that we can emulate in our computers. We’re already doing that. The deep learning technique that I mentioned uses multilayered neural nets that are inspired by how the brain works. Using these biologically inspired models, plus all of the research that’s been done over the decades in artificial intelligence, combined with exponentially expanding hardware, we will achieve human levels within two decades.
Do we really understand at all why someone’s brain can result in such a unique expression of a human? Take the transcendent intelligence of Einstein, the creativity of Steve Jobs, or the focus of Larry Page. What made those people so special? Do you have insights into that? I examine that very question, in fact, with regard to Einstein specifically in my recent book, How to Create a Mind.
Tell me.
There are two things. First of all, we create our brain with our thoughts. We have a limited capacity in the neocortex, estimated to be about 300 million pattern recognizers, which are organized in a hierarchy. We create that hierarchy with our own thinking. I would not explain Einstein’s brilliance based on him having 350 million or 400 million. We have approximately the same capacity. But he organized his brain to think deeply about this one subject. He was interested in the violin, but he was no Jascha Heifetz. And Jascha Heifetz had an interest in physics, but he was no Einstein. We have a capacity to do world-class work in one field. That’s part of the limited capacity of the brain, and Einstein really devoted it to this one field.
But a lot of physicists are devoted to their one field, and only one became Einstein.
I didn’t finish. The other aspect is courage to follow your own thought experiments and not fall off the horse because the conclusions are so different from your previous assumptions or the common belief of society. People are so unable to accept thinking different than their peers that they immediately drop their thought pattern when it leads to absurd conclusions. So there’s a certain courage to go with your convictions. Clearly Steve Jobs had that. He had a vision and carried it out. It’s that courage of your convictions.
What’s the biological basis for that kind of courage? If you had an infinite ability to analyze a brain, could you say, “Oh, here’s where the courage is?” It is the neocortex, and people who fill up too much of their neocortex with concern about the approval of their peers are probably not going be the next Einstein or Steve Jobs.
Is this something one can control? That’s a good question. I’ve been thinking about that and also why do some people readily accept the exponential growth of information technology and its implications, and other people are very resistant to it. I make the argument that hard-wired in our brain are linear expectations, because that worked very well 1000 years ago, tracking an animal in the wild. Some people, though, can readily accept the exponential perspective when you show them the evidence, and other people don’t. I’m trying to answer the question, what accounts for that? It really isn’t accomplishment level, intelligence, education level, socio-economic status. It cuts across all of those things. Some people’s neocortexes are organized so that they can accept the implications that they see in front of them without worrying too much about the opinion of others. Can we learn that? I would imagine yes, but I don’t have data to prove that.
Since we’ve been talking about Steve Jobs, let me bring up one of his famous quotes, from his speech at Stanford.
He said, “Death is very likely the single best invention of life. It’s life’s change agent.” You are very famously trying to extend your life indefinitely, so you reject that, right? Yes. This is what I call a deathist statement, part of a millennium-old rationalization of death as a good thing. It once seemed to make sense, because up until very recently you could not make a plausibly sound argument where life could be indefinitely extended. So religion, which emerged in prescientific times, did the next best thing, which is to say, “Oh, that tragic thing? That’s really a good thing.” We rationalized that because we did have to accept it. But in my mind death is a tragedy. Our initial reaction to hearing that someone has died is a profound loss of knowledge and skill and talents and relationships. It’s not the case that there are only a fixed number of positions, and if old people don’t die off, there’s no room for young people to come up with new ideas, because we’re constantly expanding knowledge. Larry Page and Sergey Brin didn’t displace anybody -- they created a whole new field. We see that constantly. Knowledge is growing exponentially. It’s doubling approximately every year.
And you think that dramatically extended life is possible.
I think we’re only 15 years away from a tipping point in longevity.
" |
754 | 2,017 | "Trump's cutting MILITARY science?! Hey that's SERIOUS | WIRED" | "https://www.wired.com/beyond-the-beyond/2017/06/trumps-cutting-military-science-hey-thats-serious" | "
Bruce Sterling | Trump's cutting MILITARY science?! Hey that's SERIOUS

*Hey man, if you're a scientist that makes bombs, that's supposed to be bulletproof. It's like the Iron Rice-Bowl of Physics.
*Also, not believing in climate science, that's okay sort-of, but not believing in ultra-advanced super-weapons, that's heresy.
Trump Budget Cuts Defense S&T by 5.8% While Funding Third Offset Priorities

President Trump’s fiscal year 2018 budget requests a 5.8 percent cut to Department of Defense S&T accounts below currently enacted levels, but still above the fiscal year 2016 baseline. Funding for late-stage development, prototyping, and demonstration receives a substantial increase in line with DOD’s priorities under its Third Offset Strategy.
While President Trump’s fiscal year 2018 budget request increases defense spending by $54 billion, its funding proposal for the Defense Department’s S&T accounts is 5.8 percent below currently enacted levels. However, the budget request was formulated prior to a 7.8 percent increase in S&T funding enacted in fiscal year 2017 appropriations on May 5. The budget’s S&T proposal is 1.6 percent above the fiscal year 2016 level.
DOD S&T comprises three of the seven defense Research, Development, Test, and Evaluation (RDT&E) accounts: basic research, applied research, and advanced technology development. Under the Trump budget, spending on RDT&E as a whole would increase by about 14 percent, dominated by multi-billion-dollar boosts to late-stage development, prototyping, and demonstration activities. That increase would return those activities closer to the funding levels that prevailed between 2005 and 2010. This emphasis on late-stage work is consistent with DOD’s Third Offset Strategy, which stresses the need for near-term technological agility to maintain superiority in combat over increasingly sophisticated adversaries.
Funding for RDT&E activities, 2000–2015, from a Sept. 2016 presentation by then-Assistant Secretary of Defense for Research and Engineering Stephen Welby. BA 1=Basic Research, BA 2=Applied Research, BA 3=Advanced Technology Development, BA 4=Advanced Component Development & Prototypes, BA 5=System Development & Demonstration, BA 7=Operational System Development. (Image credit – courtesy of DOD)

The chart below summarizes the changes the Trump administration is proposing for DOD’s S&T accounts. More details are available in FYI’s Federal Science Budget Tracker. DOD’s official budget documents can be accessed here.
Proposed S&T funding by military service

The Trump administration’s proposed funding adjustments for DOD S&T are unevenly distributed among programs administered by the Army, Navy, and Air Force, and Defense-wide agencies such as the Defense Advanced Research Projects Agency, as illustrated in the following chart:

In recent years, Army S&T has been subject to a tug-of-war between the White House and Congress, with congressional appropriators driving spending upward against the administration’s more modest ambitions. Following suit, the Trump request maintains funding levels that are relatively close to those last requested by the Obama administration but are over 20 percent below those enacted in fiscal year 2017 appropriations.
Similarly, the Trump request would provide funding for Navy S&T programs that is close to the Obama administration’s most recent request but 8.4 percent below the currently enacted level. However, the budget would restore about one third of the 16 percent cut that Congress agreed to impose on basic research spending for fiscal year 2017.
The Air Force S&T budget would be cut back by a comparatively small 3.6 percent versus currently enacted levels. Basic research would receive a disproportionate 7.3 percent decrease to a level slightly above the Obama administration’s most recent request.
Under the Trump request, DOD-wide S&T programs would continue a recent growth trend, with a spending increase of 3.1 percent. The largest percentage increase, 8 percent, would be directed toward applied research. Spending on the Defense Advanced Research Projects Agency, which draws from all three S&T accounts, would increase to $3.17 billion from the $2.87 billion fiscal year 2016 level, the most recent period for which figures are publicly available.
Third Offset initiatives continue to find support

The Trump request’s support for DOD’s Third Offset Strategy is reflected not only in proposed spending increases for late-stage RDT&E activities, but also in its continued support for initiatives that DOD has promoted as elemental to the strategy.
Strategic Capabilities Office (SCO)

The SCO’s budget would increase from about $900 million to over $1.2 billion. DOD regards the office, which develops new tactical uses for existing military technologies, as elemental to its ability to remain technologically nimble. The office is directed by William Roper, a physicist and one of DOD’s standard bearers for the Third Offset Strategy.
Defense Innovation Unit Experimental (DIUx)

The Trump budget requests $29.6 million for DIUx through the DOD-wide advanced technology development account. In fiscal year 2017 appropriations, Congress provided only $10 million, or one-third of the Obama administration’s request, reflecting some skepticism of DOD’s vision for the unit. DIUx’s objective is to set up contracts with universities and fast-moving, innovation-focused companies that do not traditionally engage with DOD. Although the unit is small, DOD has continually promoted it as part of its Third Offset efforts. (In addition to its RDT&E funding, DIUx also currently receives about $15 million per year through DOD’s operations and maintenance budget.)

Rapid Prototyping Program

The fiscal year 2017 appropriations law created a special $100 million account for rapid prototyping activities. The Trump budget requests the same level for fiscal year 2018.
Note on federal R&D funding measures

Following tradition, the Trump budget request includes an “Analytical Perspectives” document, which offers quantitative insights into the budget’s design, including government-wide figures for federal R&D funding. According to one measure, the document suggests that federal expenditure on R&D has decreased by 21 percent, and, according to a second, it has increased by 2 percent. Both figures are potentially misleading for reasons that the document explains.
The 21 percent decrease is an artifact of a recategorization beginning in fiscal year 2018 of certain activities previously classified as technology development, but that are now not classified as R&D at all. This change was implemented to bring the White House’s figures into closer alignment with those used in the National Science Foundation’s Science and Engineering Indicators. According to the new definition, DOD operational system development, which the Trump administration proposes to fund at almost $32 billion, no longer counts as R&D so its contribution simply disappears from the record.
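The arithmetic behind the two headline numbers is easy to reproduce. The figures in the sketch below are invented placeholders chosen only to mimic the reported directions; the single exception is the roughly $32 billion operational system development request cited above:

```python
# Placeholder figures in $billions, invented so the sketch mimics the two
# headline numbers; only the ~$32B operational system development (OSD)
# request is taken from the text above.
enacted_other_rd = 120.0   # hypothetical: all other federal R&D, enacted
enacted_osd = 28.0         # hypothetical: OSD spending, enacted
request_other_rd = 119.0   # hypothetical: same programs in the 2018 request
request_osd = 32.0         # roughly the OSD figure cited above

# The old definition counts OSD as R&D; the new one drops it entirely, so
# the same request looks like a deep cut against the old-definition baseline.
old_change = (request_other_rd + request_osd) / (enacted_other_rd + enacted_osd) - 1
new_change = request_other_rd / (enacted_other_rd + enacted_osd) - 1

print(f"old definition: {old_change:+.1%}")  # about +2%
print(f"new definition: {new_change:+.1%}")  # about -20%
```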
Employing the old definition of R&D, the increase in federal R&D spending in the Trump budget is accounted for by the increase in DOD’s operational system development spending alone. However, excluding this account from currently enacted levels and the Trump budget request, the deep cuts the budget request makes to other federal R&D spending significantly outweigh the spending increases proposed for other late-stage defense programs and for R&D in the National Nuclear Security Administration.
Contact the Author: William Thomas, American Institute of Physics, [email protected], (301) 209-3097
" |
755 | 2,017 | "Google Is Already Late to China's AI Revolution | WIRED" | "https://www.wired.com/2017/06/ai-revolution-bigger-google-facebook-microsoft" | "
Cade Metz | Business | Google Is Already Late to China's AI Revolution

Google chairman Eric Schmidt in Wuzhen.
Image: Noah Sheldon for WIRED

Sitting on a stage in Wuzhen, China, a historic city up the river from Shanghai, Google chairman Eric Schmidt described what he called "the age of intelligence." But he wasn't talking about human intelligence. He meant machine intelligence. He trumpeted the rise of deep neural networks and other techniques that allow machines to learn tasks largely on their own, either by finding patterns in vast amounts of data or through their own trial and error.
At Google, using a sweeping software tool called TensorFlow, engineers have built deep learning systems that can identify faces and objects in photos, recognize commands spoken into smartphones, and translate one language into another. Schmidt called this the biggest technological change of his lifetime.
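The article contains no code, but a minimal TensorFlow model gives a flavor of what the tool lets engineers express. This sketch uses the modern Keras API with made-up layer sizes and shapes; it is an illustration, not anything Google has published about its production systems:

```python
import tensorflow as tf

# A deliberately tiny image classifier sketched with TensorFlow's Keras API.
# The 28x28 input and layer sizes are arbitrary choices for illustration.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),     # grayscale image in
    tf.keras.layers.Dense(128, activation="relu"),     # learned features
    tf.keras.layers.Dense(10, activation="softmax"),   # 10 object classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Training would then look like: model.fit(images, labels, epochs=5)
```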
Then he mentioned China's three largest internet companies: Baidu, Tencent, and Alibaba. All three, he said, could benefit from TensorFlow, which Google open sourced about 18 months ago, sharing it with the world at large. "All of them would be better off if they used TensorFlow," Schmidt said of the Chinese internet giants. He said the software could predict what people want to purchase, help target ads, and even decide who should get a line of credit. "They can use TensorFlow to study the patterns of their business. They can use this technology to serve their customers faster." Delivered amidst the week-long Go match between Chinese grandmaster Ke Jie and AlphaGo, a seminal machine created by Google's DeepMind artificial intelligence lab, Schmidt's words were not hyperbole. Deep learning and related technologies are fundamentally changing the way Google works, and they will change so many other companies—even entire industries—over the next several years. The trouble is that Schmidt undersells how far these technologies have already spread beyond the walls of Google. The age of intelligence has moved ahead much farther than he admits—especially in China.
Schmidt's words accurately described the enormous power of modern neural networks. And they showed the enormity of Google's progress and ambition in this area. But if you read between the lines, they also showed the limits of the company's ambitions—namely: China. Though many in the West paint the deep learning revolution as a phenomenon driven by the big US internet companies, China is hardly far behind.
Image: Noah Sheldon for WIRED

Chinese companies like Baidu, Tencent, and Alibaba are already using these same technologies, as are US giants like Facebook, Microsoft, and Amazon. Google took an early lead, mainly because it bought up so much of the key talent. But many others have embraced deep learning in big ways, including the largest internet companies in China. "It's easy to fall into the old stereotype—the copy-to-China stereotype, that China is so far behind and they're just importing everything—but that's out of date," says Adam Coates, the American-born AI researcher who now oversees Baidu's Silicon Valley AI lab.
As far back as 2013, Baidu started an internal research lab it called The Institute of Deep Learning, showing its own extreme ambitions. Now it runs several other labs, including the 200-person outpost in Silicon Valley. All told, the company employs more than 1,800 researchers and engineers who work on AI, including driverless cars and other robotics as well as many online services. Deep learning technology is already driving everything from the Baidu search engine to the company's image and speech recognition services. More than 18 months ago, the Chinese giant publicly revealed it was using neural networks to help target online ads—one of the particular tasks Schmidt said TensorFlow could help them with.
Tencent recently opened a stateside AI lab of its own. And like Alibaba and Baidu, it's now a regular part of the international AI conference circuit that plays such an important role in the progress of AI research in academia and across the industry. (Beijing hosted the International Conference on Machine Learning in 2015.) Meanwhile, these technologies are spreading beyond the big players and across the rest of the China. A San Francisco deep learning startup called Skymind recently created a subsidiary in mainland China to serve this burgeoning market. "China is latching on to everything it can," says Skymind founder Adam Gibson, who is now based in Asia, referring to deep learning technologies.
Clearly, Google sees the opportunities available across this enormous market—just as it sees opportunities for its AI technologies in so many other parts of the world. That's why Schmidt was in China last week alongside several of the key players in the company's push toward machine learning. These included Jeff Dean, the head of the Google Brain AI lab, and Jia Li, who helps oversees artificial intelligence across the company's increasingly important cloud computing services. Google withdrew its online services from China more than seven years ago, unhappy with government censorship laws and apparent state-sponsored hacking operations. But now it wants back in, and it sees AI as the available path. The Go match— a reprise of the historic match AlphaGo played in Korea last year —was an ideal starting point.
But although Google has taken a worldwide lead in machine learning, it's clearly a long way from really applying this expertise in China. Google's online services are still blocked in the country, and though the company collaborated with local authorities in organizing the event in Wuzhen last week, this collaboration has its limits. Two days before the event, state TV pulled out, and half-an-hour into the first Go game, all online broadcasts went dark. Media outlets covered the event with news stories but they avoided the name Google, apparently under instructions from the government.
The event still took place, but it seemed to betray Google's inability to win hearts and minds in the country. Even Americans were struck by the way Schmidt talked down to Baidu, Alibaba, and Tencent, when he should have done the opposite. "Some of the major Chinese companies are some of the most sophisticated deep learning and data companies in the world," says Skymind founder and CEO Chris Nicholson. "Google has misread China in the past, and I think that Eric Schmidt's speech is evidence it will continue to misread China and lose out on one of the biggest markets on earth." Schmidt may have pushed TensorFlow for a reason. It's the sole means of using its new TPU chip, a processor specifically designed for running deep neural networks that will soon be available via Google's cloud computing services. In many ways, Google sees cloud computing, where it rents raw computing resources to businesses and coders over the internet, as the future of the company.
That future would be much, much bigger if it can get Chinese businesses on its cloud. But that reality is a long way off—at best.
Like the company's other online services, the Google cloud isn't available in China. And despite what Schmidt implied, Chinese companies like Baidu and Tencent are already starting to offer machine learning tools atop their own cloud computing services. It is indeed the age of intelligence—but the whole world already knows it.
" |
756 | 2,017 | "The Coolest Things Announced at Google I/O | WIRED" | "https://www.wired.com/2017/05/slickest-things-google-debuted-today-big-event" | "
Arielle Pardes | Gear | The Slickest Things Google Debuted Today at Its Big Event
At this year's Google I/O, the company's annual developer conference and showcase, CEO Sundar Pichai made one thing very clear: Google is moving toward an AI-first approach in its products, which means pretty soon, everything you do on Google will be powered by machine learning. During Wednesday's keynote speech, we saw that approach seep into all of Google's platforms, from Android to Gmail to Google Assistant, each of which is getting spruced up with new capabilities thanks to AI. Here's our list of the coolest things Google announced today.
Google Lens One of the flashiest announcements from today was Google Lens, a new product that lets you search the world with your phone's camera. Let's say you're on a hike and want to know if that plant by your ankle is poison oak. Or maybe you're browsing through your vacation photos from Athens and can't remember the name of that crumbling ancient structure. Google Lens can offer information on exactly what you're seeing, in real time or in photos, plus help you interact with it. Point the camera at a restaurant and Lens will not only tell you the name, but pull up the menu and help you book a table. It promises a whole new way of scanning the real world the way you would with Google Search.
Read David Pierce's story on Lens.
Image: Justin Sullivan/Getty Images

A New Chip for AI in the Cloud

Google's rethinking its computing architecture for an AI-first world, starting with its homegrown Tensor Processing Unit. The new processor, called Cloud TPU, can be used both to run neural networks and train them, and will be open to anyone through Google's cloud computing platform. Confused? Cade Metz explains all the details here.
Image: Justin Sullivan/Getty Images

Google Assistant Comes to iOS

You can already find Google Assistant on more than 100 million devices. Where's it going next? Everywhere else, of course! Google announced today that its Assistant will join Siri on iOS devices, and it's getting some fancy new features, too: You can interact with it through both speech and text, and use it to pay for things or make accounts. Google's also expanding the number of languages supported by Google Assistant: French, German, Brazilian Portuguese, and Japanese will roll out this summer; by the end of the year, Assistant will also be fluent in Spanish, Italian, and Korean.
Read David Pierce's story.
Image: Eric Risberg/AP

Google Home Learns New Skills

Google Home launched just six months ago, and already the little smart speaker can play music, order delivery, or add that upcoming concert to your calendar. Now, it can also offer proactive assistance, like pointing out that you'll need to leave in the next 10 minutes if you want to make it to the concert on time, or prompting you to grab an umbrella because it's raining outside. Google also introduced new entertainment partners, like Spotify Free, Soundcloud, and HBO Now, so you can cue up more music and movies using Home. You can also use it to make hands-free phone calls, and since Google Home can recognize up to six individual users, it won't accidentally dial your mother-in-law when you say "call my mom."

Image: Eric Risberg/AP

Google Photos Makes Sharing a Snap

Remember all those great photos you took of the championship bowling game, but then forgot to share with the team? One of those would make a great profile picture for Sarah, if only you sent it to her. That's OK, because Google's got you. Google Photos already uses machine learning to organize your photos by people, places, and events; now, it can prompt you to share your best shots with the people in them. There's also a new option to share your entire library, or all photos of certain people (like, say, automatically sending all the photos of your kid to your wife) so no one can accuse you of photo-hoarding anymore.
Image: Eric Risberg/AP

Gmail Gets Smart Replies

Writing emails is so boring.
Luckily, Google has a nice little hack to let AI do the work for you. Smart Reply---which Google first introduced on its Inbox app back in 2015 and is now rolling out to Gmail's one billion users---uses machine learning to scan the content of a message and suggest a reply. That email asking if you want to meet for dinner? Smart Reply might suggest "Sure," or "I already have plans," or "How about tomorrow instead?" and then sends your response with one click. Of course, Smart Reply is no Cyrano de Bergerac. The one-line replies can feel a little curt and dispassionate since, you know, they're written by a machine. But for getting through the drudgery of email, it does the trick.
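Google hasn't published Smart Reply's production details here; as a stand-in, this toy suggester ranks a few canned replies by crude keyword overlap with the incoming message. The reply list and function names are invented for illustration -- the real system uses learned models:

```python
# A toy reply suggester, not Google's Smart Reply: it scores a few canned
# responses against keywords in the incoming message.
CANNED = {
    "Sure, sounds good!": {"dinner", "meet", "lunch", "coffee"},
    "I already have plans.": {"dinner", "tonight", "weekend"},
    "How about tomorrow instead?": {"today", "tonight", "now"},
    "Thanks for the update.": {"fyi", "update", "news"},
}

def suggest_replies(message: str, k: int = 3) -> list[str]:
    words = set(message.lower().split())
    scored = sorted(CANNED.items(),
                    key=lambda item: len(item[1] & words),
                    reverse=True)
    return [reply for reply, _ in scored[:k]]

print(suggest_replies("Want to meet for dinner tomorrow?"))
```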
Read Liz Stinson's story here.
Image: Google

Android O Now Available in Beta

Google gave us another peek at its forthcoming OS, Android O, which comes with all kinds of bells and whistles: There's Google autofill on all its apps, smart text selection for easier copy-and-paste, and picture-in-picture functionality. Google's also promised stronger "vitals"---longer battery life, quicker boot times, and top-notch security features. For Android's 2 billion active users, that's pretty exciting. Android O has been available as a developer preview for a few months, but the rest of us plebs can check it out in beta starting today.
Image: Google

Google for Jobs

If you're looking for a new job, the first place you probably turn is a Google search. Now, you can try the separate search engine, Google for Jobs, tailor-made for finding work. Filter queries by job title, industry, or even commute time. The engine uses machine learning to cluster job titles that refer to the same thing, so you won't miss the listing for "store clerk" just because you searched for "retail associate." (A toy sketch of that kind of clustering follows this section.)

Image: Google

Standalone VR + WorldSense

Last year, Google announced Daydream, a platform for mobile VR. Now, it's developing a standalone VR headset---no cables, no phone, no PC, just VR. It will come enabled with a feature called WorldSense that'll track your position in VR without the need for external sensors. The idea is to put the headset on and immediately immerse yourself in the virtual world.
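Returning to the Google for Jobs clustering mentioned above: Google hasn't said how its system works, so as a trivial stand-in, here is a canonical-title lookup with fuzzy matching. The synonym table and cutoff are invented:

```python
import difflib

# Toy job-title normalizer, invented for illustration -- not Google's system.
CANONICAL = {
    "retail associate": ["store clerk", "sales associate", "shop assistant"],
    "software engineer": ["programmer", "developer", "swe"],
}

def normalize(title: str) -> str:
    title = title.lower().strip()
    for canon, synonyms in CANONICAL.items():
        candidates = [canon, *synonyms]
        if difflib.get_close_matches(title, candidates, n=1, cutoff=0.8):
            return canon
    return title

print(normalize("Store Clerk"))  # -> "retail associate"
```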
" |
757 | 2,017 | "AI Isn't Smart Enough (Yet) to Spot Graphic Videos on Facebook | WIRED" | "https://www.wired.com/2017/04/ai-isnt-smart-enough-yet-spot-horrific-facebook-videos" | "
Emily Dreyfuss | Business | AI Isn't Smart Enough (Yet) to Spot Horrific Facebook Videos

When Steve Stephens uploaded a 57-second video to Facebook of himself shooting and killing a man Sunday, the video stayed on Stephens' Facebook page for more than 2 hours before the company finally pulled it down. It was enough time for thousands of people to watch and share it on Facebook, and for third-party websites to download and reupload the video to their own servers. The incident reignited a fierce, if familiar, debate about what social media companies can do to keep gruesome content off of their sites and how these companies should go about removing offensive material. The murder also reminded us that once something hits the internet and gets shared around, it's incredibly difficult to scrub it from every corner of the web. So how much should companies do to prevent that content from appearing at all?

The best way to prevent a graphic video from being seen is to never let it be uploaded in the first place. Facebook could take steps to prevent just that. It could insist that someone (or some thing) watch every single video you try to post and allow it to be uploaded only after it's been approved. But if you had to wait for Facebook's approval of your video of a cat on a vacuum, you'd just post that video somewhere else. Facebook would alienate a large constituency of people who want the ability to immediately and easily share their lives. And Facebook can't afford that.
Others suggest Facebook simply delete offensive videos as soon as they're published, but there's one problem: it's not technically feasible to immediately pinpoint and delete graphic material. The technology isn't ready for algorithms to do it automatically, and it's impractical to hire enough humans to do it manually. If Facebook gave an algorithm the permission to pull down videos, it would inevitably make mistakes. And even if the algorithm got it right according to Facebook's terms of service (a big "if"), the company would be accused of censorship. That would have a chilling effect, because who would want to deal with the possibility of an algorithm wrongly deleting their videos? No one. Again, not something Facebook can afford.
Which is why, right now, Facebook mounts a multi-pronged attack. The front line is you, the Facebook user, whom Facebook relies on to watch---and flag---videos like Stephens'. Backing you up in this task is some amount of AI, which can look out for things like videos with an ID known to be associated with child porn. When videos are flagged, they are sent to Facebook's content moderators, a cavalry of hundreds of thousands of humans whose job is to watch hours of footage and determine if it should be deleted. This system is imperfect, but human moderators remain smarter than AI---for now.
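The "ID" matching described above is typically a perceptual hash compared against a blocklist. As an illustration only -- production moderation systems are far more robust -- here is a classic average hash in Python with Pillow; the distance threshold and the example blocklist entry are made up:

```python
from PIL import Image

# Illustration only: a simple perceptual "average hash" plus a blocklist
# lookup. Real systems use much more robust signatures.
def average_hash(path: str) -> int:
    img = Image.open(path).convert("L").resize((8, 8))
    pixels = list(img.getdata())
    avg = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > avg else 0)
    return bits  # 64-bit fingerprint of the image's coarse structure

def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

BLOCKLIST = {0x8F3C_0000_0000_00FF}  # hypothetical known-bad fingerprint

def is_flagged(path: str, max_distance: int = 5) -> bool:
    h = average_hash(path)
    return any(hamming(h, bad) <= max_distance for bad in BLOCKLIST)
```

Because the comparison tolerates a few differing bits, slightly re-encoded or resized copies of a known-bad frame can still match.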
Eventually, though, AI will be able to effectively flag videos like what was seen Sunday night, and when that day comes, it will be the realization of the promise that AI can work with humans---rather than replace them---to augment their skills. "I don’t think there is a task that, with enough trailing data, would not be possible to do, frankly," says Yann LeCun, director of AI research at Facebook. Though LeCun declined to answer questions about this particular video and how to fight it, what he's saying is that soon AI will be able to do more. It's not a matter of if Facebook will be able to use AI to monitor video in real-time and flag a murder, but of when.
In an ideal world, here's how Facebook would have handled Stephens' video: When he first uploaded himself saying he intended to kill people, AI-powered software would have "watched" that video immediately and flagged it as a high priority. That flag would have alerted Facebook's team of human moderators, who would have watched it, seen the direct and dire threat, removed the video, shut down Stephens' account, and alerted authorities.
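That ideal flow is essentially a priority queue feeding human review. The sketch below, with invented scores and names, shows the shape of such a triage pipeline, not Facebook's internals:

```python
import heapq

# A toy triage queue: automated scoring pushes videos for human review,
# highest risk first. The scores and IDs are placeholders.
queue: list[tuple[float, str]] = []

def auto_flag(video_id: str, risk_score: float) -> None:
    # heapq is a min-heap, so negate the score to pop highest risk first.
    heapq.heappush(queue, (-risk_score, video_id))

def next_for_review() -> str | None:
    if not queue:
        return None
    _, video_id = heapq.heappop(queue)
    return video_id

auto_flag("vid-123", risk_score=0.97)  # e.g., a spoken threat of violence
auto_flag("vid-456", risk_score=0.20)
print(next_for_review())  # "vid-123" goes to a human moderator first
```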
That's not what happened. No one flagged the first video at all, according to a statement released yesterday by Justin Osofsky, Facebook's vice president of global operations. The second video---the one of the murder itself---wasn't flagged until more than an hour and a half after Stephens uploaded it. Once a user flagged it, Osofsky said it took Facebook's moderators 23 minutes to take it down.
But this remains how the process has to work right now. Artificial intelligence is not sophisticated enough to identify the risk factors in that first video, or even necessarily in the second one that showed the murder. For AI to intervene, it would have needed to process Stephens' language; parse that speech and its intonation to differentiate it from a joke or a performance; and take the threat seriously. "There are techniques for this, but it is not clear they are integrated into the deep learning framework and can run efficiently. And there are kind of stupid mistakes that systems make because of lack of common sense," LeCun says. "Like if someone is twice the size, they are twice as close. There is common sense that all of us learn, animals learn too, that machines haven't quite been able to figure out yet."

Facebook knows it needs AI to learn this. It is invested heavily in it---LeCun's team is second only to Google in advancing the field. And it already employs algorithms to help flag certain questionable content where computer vision is better suited---namely child pornography, nudity, and copyright violations. In an interview with WIRED last fall, Facebook CEO Mark Zuckerberg said that half of all flags on the network now come from AI as opposed to people. "This is an area where there are two forces that are coming together," he said. "There's this community that is helping people to solve problems on an unprecedented scale. At the same time, we're developing new technologies that augment what this community can do."
But even Zuckerberg realizes that for now, human curators must continue to work alongside AI, and the video that Stephens uploaded on Sunday is a prime example of why. At the F8 developer conference in San Jose Tuesday, Zuckerberg addressed this controversy directly. "We have a lot more to do here. We're reminded of this this week by the tragedy in Cleveland," he told the crowd. "And we have a lot of work, and we will keep doing all we can to prevent tragedies like this from happening." Training a computer to identify that kind of violence is much harder than merely asking it to spot a naked body. It's a lot like trying to identify fake news: It requires a complex understanding of context cues and formats.
Since it will take time for Facebook to train its neural networks to streamline that process, in the immediate future Facebook will need to make changes to its moderation process, something the company acknowledges. In his statement after the incident, Osofsky said, "As a result of this terrible series of events, we are reviewing our reporting flows to be sure people can report videos and other material that violates our standards as easily and quickly as possible." This will mean making it easier to flag high-priority content, adding more human moderators, and insisting they work faster. And these human moderators will have to continue training AI. That in itself is going to take a long time. Before AI can be trained to effectively identify offensive content, it needs lots of examples to learn from. So the first thing it needs is lots of properly labeled data to use as fodder. That requires hourly-wage human employees to watch endless amounts of on-screen violence and threatening language---grueling work that takes time.
The challenge is even bigger when Facebook Live is taken into consideration. Live video is hard to control, which is why some people have called for Facebook to get rid of its Live feature completely. That's unrealistic; the company introduced it last year in order to compete with other live-streaming services, and it's not going anywhere. Additionally, the service has captured another side of violent incidents. Last year, after police shot Philando Castile, his girlfriend used Facebook Live to capture the aftermath of the shooting and essentially used the streaming service as a way to send a global SOS.
"Instant video and live video are here to stay, for better or worse," according to Jeremy Littau, assistant professor of journalism and communication at Lehigh University. "And Facebook has to compete in that reality." Short of getting rid of Live, Facebook could treat the features like broadcast networks do and insist that all video be on a delay. But for the reasons already articulated above, that delay wouldn't be of much use unless someone or something was monitoring every video, and that's not yet possible.
One thing Facebook could do is make it harder to download videos from Facebook, similar to how Instagram (also owned by Facebook) works. This could hinder third-party sites like Live Leak from grabbing and redistributing videos like the one Stephens uploaded Sunday. And while a small tweak like that won't stop the video from being uploaded in the first place, it could prevent it from being uploaded elsewhere, to enter the memory of the Internet forever, never to be erased.
Cade Metz contributed reporting.
" |
758 | 2,017 | "Facebook's Augmented Reality Engine Brings AI Right to Your Phone | WIRED" | "https://www.wired.com/2017/04/facebooks-augmented-reality-engine-brings-ai-right-phone" | "
Cade Metz | Business | Facebook's Augmented Reality Engine Brings AI Right to Your Phone
Image: Stephen Lam/Reuters

When Hussein Mehanna showed off a new incarnation of Facebook's Big Blue App back in November, it seemed a tiny improvement—at least on the surface. The app could transform a photo from your cousin's wedding into a Picasso or a Van Gogh or a Warhol, a bit of extra fun for your social media day. But Mehanna and his team of Facebook engineers were laying the groundwork for an audacious effort to change the future of computing—what Facebook CEO Mark Zuckerberg calls a platform for augmented reality.
Zuckerberg formally unveiled this platform on Tuesday morning during his keynote at F8, Facebook's annual developer conference. In short, Facebook is transforming the camera on your smartphone into an engine for what is commonly called AR. The company will soon allow outside companies and other developers to build digital effects that you can layer atop what you see through your camera. "This will allow us to create all kinds of things that were only available in the digital world," Zuckerberg said on stage at the civic center in downtown San Jose, California. "We're going to interact with them and explore them together." Initially, Facebook will offer ways of applying these effects to still images, videos, or even live videos shot with your phone. On stage, Zuckerberg showed how you could add a digital coffee cup to a photo of your kitchen table—or even add a school of digital sharks that swim endlessly around your bowl of cereal. But the company is also working on ways of "pinning" digital objects to specific locations in the real world. You could "attach" a digital note to your refrigerator, and if your spouse views the fridge through her camera, she could see it too, as if the note was really there. In other words, Zuckerberg views his platform as a way of expanding a game like Pokémon Go into a fundamental means of interacting with the world around us.
That's a bold play, to say the least. And frankly, it's a very difficult thing to pull off—just in a technical sense, let alone all the logistical questions that surround AR. Facebook will grapple with many of these questions in the months and years to come, most notably among them: Do people really want to view the world through their phones? But the company is already making serious progress on the technical side, as Mehanna's artist-filter demo made clear back in November.
In applying Picasso's style to personal snapshots, that new Facebook app leans on deep neural networks, a form of artificial intelligence that's rapidly reinventing the tech world. But these neural networks are different. They run on the phone itself, not in a data center on the other side of the internet. This is essential to the kind of augmented reality Zuckerberg so gleefully pitched on Tuesday morning. You can't do what he wants to do unless these AI techniques run right there on the phone. Going over the internet takes much too long. The effect is lost.
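What does "running on the phone" actually look like? Facebook's own mobile stack isn't detailed here, but the general pattern across the industry is the same: train a network on servers, export it as a compact self-contained artifact, and execute it locally. A minimal sketch of that export-and-load round trip using TorchScript, with a hypothetical stand-in model:

```python
import torch
import torch.nn as nn

# A tiny stand-in for a style-transfer network; real models are much larger.
class TinyStyleNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 3, kernel_size=3, padding=1)

    def forward(self, x):
        return torch.tanh(self.conv(x))

model = TinyStyleNet().eval()

# On the server: trace the model into an artifact a phone app can ship.
example = torch.rand(1, 3, 224, 224)
scripted = torch.jit.trace(model, example)
scripted.save("style_net.pt")

# On the device: inference is a local call, with no trip to a data center.
on_device = torch.jit.load("style_net.pt")
stylized = on_device(torch.rand(1, 3, 224, 224))
print(stylized.shape)  # torch.Size([1, 3, 224, 224])
```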
"You can think of those early demonstrations as somewhat frivolous," says Yann LeCun, Facebook's director of AI research and one of the founding fathers of the deep learning movement. "But the underlying techniques can be used for so much more." In order to layer a digital effect atop your smiling face, for instance, Facebook must identify exactly where your smiling face is within a camera's field of vision, and that requires a neural network. As LeCun explains, the company is also using neural networks to track people's movements, so that effects can move in tandem with the real world. And according to Facebook chief technology officer Mike Schroepfer, the company is exploring ways of adding effects based not only on what people are doing but what they're saying. That too requires a neural network. "We're trying to build a pipeline of the core technologies that will enable all of these common AR effects," he says.
Some of the effects that Zuckerberg described—most notably the technology that will let you pin stuff in the real world—are still months down the road, if not more. "There's a lot more that you have to get right to do that work," Schroepfer says. To attach a digital artifact to a physical location, the Facebook app must build what is really a detailed map of that location and then offer a way of sharing that map with others.
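Facebook hasn't published what such a shared anchor looks like, but conceptually each pinned object pairs a coarse GPS fix with a precise pose inside a reconstructed scene. A toy record with every field name hypothetical:

```python
from dataclasses import dataclass

@dataclass
class AnchoredNote:
    """A hypothetical record for a note 'pinned' to the physical world."""
    note_id: str
    text: str
    latitude: float         # coarse position from GPS
    longitude: float
    local_transform: list   # 4x4 pose relative to the reconstructed scene
    scene_map_id: str       # which shared geometry map this anchor lives in

fridge_note = AnchoredNote(
    note_id="note-001",
    text="Buy milk",
    latitude=37.3318,
    longitude=-122.0312,
    local_transform=[[1, 0, 0, 0.2],
                     [0, 1, 0, 1.5],
                     [0, 0, 1, 0.0],
                     [0, 0, 0, 1.0]],
    scene_map_id="kitchen-scan-42",
)
```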
"If I want to leave a note on the table at the bar," he says, "I am both recording the precise location with GPS and recording the geometry of that scene in such a way that someone else, with a phone that was never there before, shows up and see the world and boot up this digital representation of it." What's more, as these effects get more and more complex, they will run up against the very real hardware limits of our phones. Smartphones offer far less processing power than computer servers packed into data centers, and though Facebook has significantly slimmed down its deep learning tech for mobile devices, more complex models will require more juice. But here too, the groundwork is already being laid.
Intel, Qualcomm, and other chip makers are working to build mobile processors better suited to these kinds of machine learning techniques.
According to Schroepfer, these types of hardware enhancements could provide a two- to three-fold boost to the company's machine learning models.
"We've seen things go from 10 frames per second to thirty frames per second," he says. "That's the difference between it's-not-really-usable and it's-kinda-fun." Zuckerberg's grand vision for camera AR is still under development. But the path is in place—at least technically.
" |
759 | 2,015 | "IBM's 'Rodent Brain' Chip Could Make Our Phones Hyper-Smart | WIRED" | "https://www.wired.com/2015/08/ibms-rodent-brain-chip-make-phones-hyper-smart" | "Cade Metz

Dharmendra Modha walks me to the front of the room so I can see it up close. About the size of a bathroom medicine cabinet, it rests on a table against the wall, and thanks to the translucent plastic on the outside, I can see the computer chips and the circuit boards and the multi-colored lights on the inside. It looks like a prop from a '70s sci-fi movie, but Modha describes it differently. "You're looking at a small rodent," he says.
He means the brain of a small rodent—or, at least, the digital equivalent. The chips on the inside are designed to behave like neurons—the basic building blocks of biological brains. Modha says the system in front of us spans 48 million of these artificial nerve cells, roughly the number of neurons packed into the head of a rodent.
Modha oversees the cognitive computing group at IBM, the company that created these "neuromorphic" chips. For the first time, he and his team are sharing their unusual creations with the outside world, running a three-week "boot camp" for academics and government researchers at an IBM R&D lab on the far side of Silicon Valley. Plugging their laptops into the digital rodent brain at the front of the room, this eclectic group of computer scientists is exploring the particulars of IBM's architecture and beginning to build software for the chip dubbed TrueNorth.
Some researchers who got their hands on the chip at an engineering workshop in Colorado the previous month have already fashioned software that can identify images, recognize spoken words, and understand natural language. Basically, they're using the chip to run "deep learning" algorithms, the same algorithms that drive the internet's latest AI services, including the face recognition on Facebook and the instant language translation on Microsoft's Skype.
But the promise is that IBM's chip can run these algorithms in smaller spaces with considerably less electrical power, letting us shoehorn more AI onto phones and other tiny devices, including hearing aids and, well, wristwatches.
"What does a neuro-synaptic architecture give us? It lets us do things like image classification at a very, very low power consumption," says Brian Van Essen, a computer scientist at the Lawrence Livermore National Laboratory who's exploring how deep learning could be applied to national security. "It lets us tackle new problems in new environments." The TrueNorth is part of a widespread movement to refine the hardware that drives deep learning and other AI services. Companies like Google and Facebook and Microsoft are now running their algorithms on machines backed with GPUs (chips originally built to render computer graphics), and they're moving towards FPGAs (chips you can program for particular tasks). For Peter Diehl, a PhD student in the cortical computation group at ETH Zurich and University Zurich , TrueNorth outperforms GPUs and FPGAs in certain situations because it consumes so little power.
The main difference, says Jason Mars, a professor of computer science at the University of Michigan, is that the TrueNorth dovetails so well with deep-learning algorithms. These algorithms mimic neural networks in much the same way IBM's chips do, recreating the neurons and synapses in the brain. One maps well onto the other. "The chip gives you a highly efficient way of executing neural networks," says Mars, who declined an invitation to this month's boot camp but has closely followed the progress of the chip.
That said, the TrueNorth suits only part of the deep learning process—at least as the chip exists today—and some question how big an impact it will have. Though IBM is now sharing the chips with outside researchers, it's years away from the market. For Modha, however, this is as it should be. As he puts it: "We're trying to lay the foundation for significant change."

Peter Diehl recently took a trip to China, where his smartphone didn't have access to the 'net, an experience that cast the limitations of today's AI in sharp relief. Without the internet, he couldn't use a service like Google Now, which applies deep learning to speech recognition and natural language processing, because most of the computing takes place not on the phone but on Google's distant servers. "The whole system breaks down," he says.
Deep learning, you see, requires enormous amounts of processing power—processing power that's typically provided by the massive data centers that your phone connects to over the 'net rather than locally on an individual device. The idea behind TrueNorth is that it can help move at least some of this processing power onto the phone and other personal devices, something that can significantly expand the AI available to everyday people.
To understand this, you have to understand how deep learning works. It operates in two stages. First, companies like Google and Facebook must train a neural network to perform a particular task. If they want to automatically identify cat photos, for instance, they must feed the neural net lots and lots of cat photos. Then, once the model is trained, another neural network must actually execute the task. You provide a photo and the system tells you whether it includes a cat. The TrueNorth, as it exists today, aims to facilitate that second stage.
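The two stages are easy to see in miniature. In this sketch, a plain logistic-regression model stands in for a deep net: the training loop is the expensive first stage, and the final predict function is the cheap second stage that a chip like TrueNorth targets (TrueNorth itself runs spiking networks, not code like this):

```python
import numpy as np

# Stage one: training (normally done on powerful servers).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)    # a trivially learnable rule

w, b = np.zeros(2), 0.0
for _ in range(500):                          # plain logistic regression
    p = 1 / (1 + np.exp(-(X @ w + b)))
    grad = p - y
    w -= 0.1 * (X.T @ grad) / len(y)
    b -= 0.1 * grad.mean()

# Stage two: execution, the part that must be cheap enough for a device.
def predict(x):
    return 1 / (1 + np.exp(-(x @ w + b))) > 0.5

print(predict(np.array([1.0, 1.0])))    # True
print(predict(np.array([-1.0, -1.0])))  # False
```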
Once a model is trained in a massive computer data center, the chip helps you execute the model. And because it's small and uses so little power, it can fit onto a handheld device. This lets you do more at a faster speed, since you don't have to send data over a network. If it becomes widely used, it could take much of the burden off data centers. "This is the future," Mars says. "We're going to see more of the processing on the devices." Google recently discussed its efforts to run neural networks on phones, but for Diehl, the TrueNorth could take this concept several steps further. The difference, he explains, is that the chip dovetails so well with deep learning algorithms. Each chip mimics about a million neurons, and these can communicate with each other via something similar to a synapse, the connections between neurons in the brain.
The setup is quite different than what you find in chips on the market today, including GPUs and FPGAs. Whereas these chips are wired to execute particular "instructions," the TrueNorth juggles "spikes," much simpler pieces of information analogous to the pulses of electricity in the brain. Spikes, for instance, can show the changes in someone's voice as they speak—or changes in color from pixel to pixel in a photo. "You can think of it as a one-bit message sent from one neuron to another," says Rodrigo Alvarez-Icaza, one of the chip's chief designers.
The upshot is a much simpler architecture that consumes less power. Though the chip contains 5.4 billion transistors, it draws about 70 milliwatts of power. A standard Intel computer processor, by comparison, includes 1.4 billion transistors and consumes about 35 to 140 watts. Even the ARM chips that drive smartphones consume several times more power than the TrueNorth.
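To get a feel for spike-based computation, here is a toy "leaky integrate-and-fire" neuron in a few lines of Python. It illustrates the general idea, not IBM's design: incoming one-bit spikes nudge up an internal potential, which leaks away over time and fires its own spike when it crosses a threshold.

```python
import numpy as np

def simulate(input_spikes, leak=0.9, weight=0.3, threshold=1.0):
    """One leaky integrate-and-fire neuron: spikes in, one-bit spikes out."""
    potential, out = 0.0, []
    for s in input_spikes:
        potential = leak * potential + weight * s  # integrate with leak
        if potential >= threshold:                 # fire and reset
            out.append(1)
            potential = 0.0
        else:
            out.append(0)
    return out

rng = np.random.default_rng(1)
incoming = (rng.random(20) < 0.6).astype(int)      # a noisy spike train
print(simulate(incoming.tolist()))
```

There are no instructions anywhere in that loop, which is the point: computation emerges from when the spikes arrive and how strongly they're weighted.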
Of course, using such a chip also requires a new breed of software. That's what researchers like Diehl are exploring at the TrueNorth boot camp, which began in early August and runs for another week at IBM's research lab in San Jose, California. In some cases, researchers are translating existing code into the "spikes" that the chip can read (and back again). But they're also working to build native code for the chip.
Like these researchers, Modha discusses the TrueNorth mainly in biological terms. Neurons. Axons. Synapses. Spikes. And certainly, the chip mirrors such wetware in some ways. But the analogy has its limits. "That kind of talk always puts up warning flags," says Chris Nicholson, the co-founder of deep learning startup Skymind.
"Silicon operates in a very different way than the stuff our brains are made of." Modha admits as much. When he started the project in 2008, backed by $53.5M in funding from Darpa, the research arm for the Department of Defense, the aim was to mimic the brain in a more complete way using an entirely different breed of chip material. But at one point, he realized this wasn't going to happen anytime soon. "Ambitions must be balanced with reality," he says.
In 2010, while laid up in bed with the swine flu, he realized that the best way forward was a chip architecture that loosely mimicked the brain—an architecture that could eventually recreate the brain in more complete ways as new hardware materials were developed. "You don't need to model the fundamental physics and chemistry and biology of the neurons to elicit useful computation," he says. "We want to get as close to the brain as possible while maintaining flexibility."

This is TrueNorth. It's not a digital brain. But it is a step toward a digital brain. And with IBM's boot camp, the project is accelerating. The machine at the front of the room is really 48 separate machines, each built around its own TrueNorth processors. Next week, as the boot camp comes to a close, Modha and his team will separate them and let all those academics and researchers carry them back to their own labs, which span over 30 institutions on five continents. "Humans use technology to transform society," Modha says, pointing to the room of researchers. "These are the humans."
" |
760 | 2,014 | "Buying Madbits, Twitter Wants Image-Search Super Powers | WIRED" | "https://www.wired.com/2014/07/buying-madbits-twitter-wants-image-search-super-powers" | "Cade Metz

To understand why Twitter just bought an artificial intelligence company called Madbits, it helps to watch a video where a modern day computer learns to play a 35-year-old video game.
Captured at a conference in Paris this spring, the video (see above) shows a machine coming to grips with a game called Breakout, something so many kids spent so many hours playing on the Atari game console in the early '80s.

Breakout is kinda like Pong, where a tiny digital ball bounces around the screen and players use a tiny digital racket to knock it against various colored bricks, and at first, the machine does about as well as those kids in the early '80s, missing the ball on many occasions. But then the video shows that if the machine spends about two hours practicing, it becomes better at the game than any human could ever be. And after four hours, it not only hits the ball every time, but also figures out a wonderfully clever way of knocking down more bricks, more quickly.
The machine draws on an artificial intelligence technique known as a convolutional neural network.
With this technique---a rough mimic of the networks of neurons in the human brain---a computer can learn to better handle certain tasks by doing them over and over again. The machine in the video uses convolutional neural nets to learn Breakout , Pong , and other Atari games, but the technology is also very well suited to teaching machines how to recognize what's pictured in digital photos. And judging from research published by the founders of Madbits, it seems this type of artificial intelligence lies at the heart of the image recognition technology built by the tiny New York company.
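Madbits hasn't released its code, but the operation at the core of any convolutional net is simple enough to show directly: a small filter slides across an image and produces a strong response wherever its pattern appears. A bare-bones sketch in numpy:

```python
import numpy as np

def conv2d(image, kernel):
    """Slide a small filter over an image: the core op of a convnet."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

image = np.zeros((6, 6))
image[:, 3:] = 1.0                       # a vertical edge
edge_filter = np.array([[-1.0, 1.0]])    # responds to left-to-right changes
print(conv2d(image, edge_filter))        # strong response along the edge
```

Deep networks stack many layers of such filters and, crucially, learn the filter values from data rather than having them hand-coded.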
Twitter and Madbits decline to discuss the acquisition, but in a brief message posted to the Madbits website, the company's founders---Clément Farabet and Louis-Alexandre Etezad-Heydari---do say that the company has built a "visual intelligence technology that automatically understands, organizes and extracts relevant information from raw media" and that this technology is based on "deep learning," a form of AI that includes convolutional neural nets. In any event, the video above---which shows off the work of another deep learning startup called DeepMind---goes a long way toward showing what this technology is all about. Deep learning is essentially a way for machines to very rapidly teach themselves how to do stuff.
"By the end of the video, you can see how well the machine learned," says Adam Gibson, founder of a third deep learning startup called Skymind.
"Unlike human players, it takes really short jumps, never higher than it had to, which makes it faster." >Deep learning is essentially a way for machines to very rapidly teach them themselves how to do stuff.
Deep learning is so effective, most of the biggest names in tech are now applying it to their own internet services. Before Twitter acquired Madbits, Google bought both DeepMind and DNNresearch, a startup founded by the academic at the heart of the deep learning movement, Geoff Hinton. Microsoft used deep learning to build its new Skype Translation tool. And Facebook hired Yann LeCun, another big-name researcher in the field.
Farabet and Etezad-Heydari, the founders of Madbits, were students of LeCun's at New York University. Information about the technology their company has built is scant, but Farabet published several papers related to convolutional neural nets while at NYU and his resume says that the Madbits technology is based on his previous research. Like other deep learning techniques, convolutional neural nets are basically multi-layered algorithms that run across a large number of computers, analyzing large amounts of data in an effort to learn the task at hand.
What the company does say is that its technology is a way of carefully examining images. "Over this past year, we've built visual intelligence technology that automatically understands, organizes and extracts relevant information from raw media," reads the company's webpage. "Understanding the content of an image, whether or not there are tags associated with that image, is a complex challenge." It is indeed. But researchers like Hinton, LeCun, and Farabet have already made some significant progress in this area. The trick with deep learning is that, by examining more and more images as time goes on, machines can get better and better at recognizing what's in them, and clearly, this is what Twitter is hoping to draw on. Google is already using convolutional neural nets to automatically add textual tags to images posted to its Google+ social network, and this only begins to show what deep learning is capable of. Like Facebook, Google, and others, Twitter could use such technology to power an image search engine, letting you more easily locate images posted to its social network, and it could better analyze the stuff you're posting to its service and use this information to tailor your experience accordingly, which could include carefully targeted ads.
Deep learning allows machines to process information more like humans do. But at the same time, as the DeepMind video shows, it allows machines to move beyond what humans are capable of. That is the goal not only for Twitter, but for Microsoft, Facebook, Google, and so many others.
" |
761 | 2,016 | "In OpenAI's Universe, Computers Learn to Use Apps Like Humans Do | WIRED" | "https://www.wired.com/2016/12/openais-universe-computers-learn-use-apps-like-humans" | "Cade Metz

OpenAI, the billion-dollar San Francisco artificial intelligence lab backed by Tesla CEO Elon Musk, just unveiled a new virtual world. It's called Universe, and it's a virtual world like no other. This isn't a digital playground for humans. It's a school for artificial intelligence. It's a place where AI can learn to do just about anything.
Other AI labs have built similar worlds where AI agents can learn on their own. Researchers at the University of Alberta offer the Atari Learning Environment, where agents can learn to play old Atari games like Breakout and Space Invaders.
Microsoft offers Malmo, based on the game Minecraft. And just today, Google's DeepMind released an environment called DeepMind Lab. But Universe is bigger than any of these. It's an AI training ground that spans any software running on any machine, from games to web browsers to protein folders.
"The domain we chose is everything that a human can do with a computer," says Greg Brockman, OpenAI's chief technology officer.
In coder-speak, Universe is a software platform---software for running other software---and much of it is now open source, so anyone can use and even modify it. In theory, AI researchers can plug any application into Universe, which then provides a common way for AI "agents" to interact with these applications. That means researchers can build bots that learn to navigate one application and then another and then another.
For OpenAI, the hope is that Universe can drive the development of machines with "general intelligence"—the same kind of flexible brain power that humans have. "An AI should be able to solve any problem you throw at it," says OpenAI researcher and former Googler Ilya Sutskever. That's a ridiculously ambitious goal. And if it's ever realized, it won't happen for a very long time. But Sutskever argues that it's already routine for AI systems to do things that seemed ridiculously ambitious just a few years ago.
He compares Universe to the ImageNet project created by Stanford computer scientist Fei-Fei Li in 2009. The goal of ImageNet was to help computers "see" like humans. At the time, that seemed impossible. But today, Google's Photo app routinely recognizes faces, places, and objects in digital images. So does Facebook. Now, OpenAI wants to expand artificial intelligence to every dimension of the digital realm---and possibly beyond.
In Universe, AI agents interact with the virtual world by sending simulated mouse and keyboard strokes via what's called Virtual Network Computing, or VNC.
In this way, Universe facilitates reinforcement learning, an AI technique where agents learn tasks by trial and error, carefully keeping tabs on what works and what doesn't, what brings the highest score or wins a game or grabs some other reward. It's a powerful technology: Reinforcement learning is how Google's DeepMind lab built AlphaGo, the AI that recently beat one of the world's top players at the ancient game of Go.
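The bookkeeping behind reinforcement learning fits on a page. In this toy version, an agent in a five-cell corridor learns, purely from a reward at the far end, that moving right is the winning policy. The systems DeepMind and OpenAI build replace the lookup table with a deep neural network, but the core update is the same:

```python
import numpy as np

# Minimal tabular Q-learning on a 1-D corridor: reach the rightmost cell.
n_states, n_actions = 5, 2              # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.5, 0.9, 0.1
rng = np.random.default_rng(0)

for episode in range(200):
    s = 0
    while s != n_states - 1:
        # Explore occasionally; otherwise exploit what we've learned so far.
        a = rng.integers(n_actions) if rng.random() < epsilon else int(Q[s].argmax())
        s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s2 == n_states - 1 else 0.0
        # Core update: nudge Q toward reward plus discounted future value.
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

print(Q.argmax(axis=1))  # learned policy: always move right
```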
But with Universe, reinforcement learning can happen inside any piece of software. Agents can readily move between applications, learning to crack one and then another. In the long run, Sutskever says, they can even practice "transfer learning," in which an agent takes what it has learned in one application and applies it to another. OpenAI, he says, is already building agents that can transfer at least some learning from one driving game to another.
Michael Bowling, a University of Alberta professor who helped create the Atari Learning Environment, questions how well Universe will work in practice, if only because he hasn't used it. But he applauds the concept---an AI proving ground that spans not just games but everything else. "It crystallizes an important idea: Games are a helpful benchmark, but the goal is AI."

Still, games are where it starts. OpenAI has seeded Universe with about a thousand games, securing approval from publishers like Valve and Microsoft. It's also working with Microsoft to add Malmo and says it's interested in adding DeepMind Lab as well.
Games have always served as a natural training tool for AI. They're more contained than the real world, and there's a clear system of rewards, so that AI agents can readily learn which actions to take and which to avoid. Games aren't ends in and of themselves, but they've already helped create AI that has a meaningful effect on the real world. After building AI that can play old Atari games better than any human ever could, DeepMind used much the same technology to refine the operation of Google's worldwide network of computer data centers, reducing its energy bill by hundreds of millions of dollars.
Craig Quiter is using Universe with a similar goal in mind. Quiter helped build the platform at OpenAI before moving across town to Otto, the self-driving truck startup Uber acquired this summer in a deal worth about $680 million. Last month, drawing on work from several engineers who worked on autonomous cars inside Google, Otto's driverless 18-wheeler delivered 50,000 cans of Budweiser down 120 miles of highway from Fort Collins to Colorado Springs. But Quiter is looking well beyond the $30,000 in hardware and software that made this delivery possible. With help from Universe, he's building an AI that can play Grand Theft Auto V.
Today, Otto's truck can navigate a relatively calm interstate. But in the years to come, the company hopes to build autonomous vehicles that can respond to just about anything they encounter on the road, including cars spinning out of control across several lanes of traffic. The digitized chaos of Grand Theft Auto, the thinking goes, can help the AI controlling those vehicles learn to handle the unexpected.
Meanwhile, researchers at OpenAI are already pushing Universe beyond games into web browsers and protein folding apps used by biologists. Andrej Karpathy, the lead researcher of this sub-project, dubbed World of Bits, questions how useful games will be in building AI for the real world. But an AI that learns how to use a web browser is, in a sense, already learning to participate in the real world. The web is part of our daily lives. Navigating a browser exercises both motor skills and language skills. It's a gateway to any software or any person.
The rub is that reinforcement learning inside a web browser is far more difficult to pull off. Universe includes a deep neural network that can automatically read scores from a game screen in much the same way neural nets can recognize objects or faces in photos. But web services have no score. Researchers must define their own reward functions. Universe allows for this, but it's still unclear what rewards will help agents, say, sign into a website or look up facts on Wikipedia, tasks that OpenAI is already exploring.
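In practice, that means a researcher writes the reward by hand. A hypothetical example for a sign-in task, where the state dictionary and its keys are invented for illustration:

```python
# A hand-written reward for a "sign into a website" task.
# Web pages have no score counter, so the researcher must define success.
def login_reward(browser_state):
    """browser_state is an imagined dict describing the current page."""
    if browser_state.get("logged_in"):
        return 1.0                     # task completed
    if browser_state.get("error_banner"):
        return -0.1                    # small penalty for failed attempts
    return -0.01                       # tiny per-step cost, rewarding speed

print(login_reward({"logged_in": True}))     # 1.0
print(login_reward({"error_banner": True}))  # -0.1
print(login_reward({}))                      # -0.01
```

Get the reward wrong and the agent learns the wrong lesson, which is why this step is harder than it looks.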
But if we can teach machines these more amorphous tasks---teach AI agents to do anything on a computer---Sutskever believes we can teach them to do just about anything else. After all, an AI can't browse the internet unless it understands the natural way we humans talk. It can't play Grand Theft Auto without the motor skills of a human. And like so many others, Quiter argues that navigating virtual worlds isn't so different from navigating the real world. If Universe reaches its goal, then general intelligence isn't that far away. It's a ridiculous aim---but it may not be ridiculous for long.
Update: This story has been updated with mention of DeepMind Lab.
" |
762 | 2,016 | "Google's Improbable Deal to Recreate the Real World in VR | WIRED" | "https://www.wired.com/2016/12/googles-improbable-deal-recreate-real-world-vr" | "Cade Metz

Let a thousand virtual worlds rain down from the clouds. Or rather, the cloud. That's the call from Google as it gets behind a tiny British startup called Improbable.
Founded by two Cambridge graduates and backed by $20 million in funding from the venture capitalists at Andreessen Horowitz, Improbable offers a new way of building virtual worlds, including not just immersive games à la Second Life or World of Warcraft, but also vast digital simulations of real cities, economies, and biological systems. The idea is that these virtual worlds can run in a holistic way across a practically infinite network of computers, so that they can expand to unprecedented sizes and reach new levels of complexity.
So far, the startup has shared its technology with just a handful of coders and companies. But today, Improbable joined forces with Google to offer its creation, called SpatialOS , to anyone who wants it.
You can think of SpatialOS as a cloud computing service for building virtual worlds, whether they run on desktop computers or VR rigs like the Oculus Rift. This service runs atop the Google Cloud Platform, the tech giant's growing cloud computing empire, and the two companies just opened a SpatialOS alpha program that lets coders prototype and test their own virtual worlds. When the beta launches in the first quarter of next year, a separate program will provide coders with free time on Google's cloud as they hone these virtual worlds for release onto the internet at large.
On one level, this partnership allows Google to promote its cloud services as it challenges rivals like Amazon Web Services and Microsoft Azure. In providing the cloud infrastructure that underpins Pokémon Go, Google has seen the thirst for virtual and augmented reality firsthand, and now, with Improbable, it hopes to push even further into this burgeoning market. But this partnership also points to something bigger down the road: the future of AI.
As developers build more complex virtual worlds, this provides AI researchers with better ways of training the next generation of artificial intelligence. Games have long offered a proving ground for AI, but SpatialOS can help expand this proving ground, providing a way not only for AI agents to learn the successor to Second Life, but to navigate real city streets or even trace the path of contagious disease.
If AI agents set loose in virtual simulations of the real world sound like Gibsonian science fiction, consider Universe, an AI training ground just recently unveiled by OpenAI, the lab bootstrapped by Tesla CEO Elon Musk and Y Combinator president Sam Altman. Universe is a software platform where researchers can train AI agents to use any application, from games to web browsers to protein folding simulations---anything humans can do on a computer. In theory, you could train agents to navigate any of the beefed-up virtual worlds built with Improbable.
That opens AI research to a new frontier. Game designers Dean Hall (creator of Day Z) and Henrique Olifiers (CEO of Bossa Studios, maker of World Adrift) say Improbable allows massively multiplayer games to achieve unprecedented complexity and scale. And in an effort to understand the impact of autonomous cars, a UK startup called Immense Simulations is using the service to model entire cities. "We can cover really large geographical areas," says CEO Robin North, "but still keep a high level of detail." In the end, such simulations could also provide training grounds for those autonomous cars. Craig Quiter, an engineer at Otto, the robo-vehicle company owned by Uber, is training AI agents on Grand Theft Auto as a stepping stone to more advanced self-driving cars. Swap Grand Theft Auto for a simulation of the city of Manchester, and you get even closer to that goal.
Improbable CEO Herman Narula stresses that today his service is mainly a way of building games. But he too sees it as a path to better AI, hinting that his company is already working with others toward this goal. If a thousand virtual worlds take shape, so too can a thousand AIs.
" |
763 | 2,016 | "Cozmo Is an Artificially Intelligent Toy Truck That's Also the Future of Robotics | WIRED" | "https://www.wired.com/2016/07/cozmo-artificially-intelligent-toy-truck-thats-also-future-robotics" | "Cade Metz

Hanns Tappeiner types a few lines of code into his laptop and hits "return." A tiny robot sits beside the laptop, looking like one of those anthropomorphic automobiles that show up in Pixar's Cars movies. Almost instantly, it wakes up, rolls down the table, and counts to four. This is Cozmo—an artificially intelligent toy robot unveiled late last month by San Francisco startup Anki—and Tappeiner, one of the company's founders, is programming the little automaton to do new things.
The programs are simple—he also teaches Cozmo to stack blocks—but they're supposed to be simple. Tappeiner is using Anki's newly unveiled software development kit—an SDK, in coder parlance—that he says even the greenest of coders can use to tweak the behavior of the toy robot. And that's a big deal, at least according to Anki. The company claims the SDK is the first of its kind: a kit that lets anyone program such an intelligent robot, a robot that recognizes faces and navigates new environments and even mimics emotions. With the kit, Tappeiner says, "we're trying to advance the field of robotics." He compares the move to Apple letting people build apps for the iPhone.
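Anki's documentation spells out the basic shape of such a program. Here is a sketch of the counting demo, assuming the SDK's documented say_text and run_program entry points; it needs an actual Cozmo, connected through the companion phone app, to run:

```python
import cozmo

def count_to_four(robot: cozmo.robot.Robot):
    # Each say_text call returns an action; wait for it before the next word.
    for word in ("one", "two", "three", "four"):
        robot.say_text(word).wait_for_completed()

# run_program handles connecting to the robot, then invokes our function.
cozmo.run_program(count_to_four)
```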
This is the kind of talk that accompanies just about every new contraption that emerges from Silicon Valley. But Anki has enjoyed an especially big dollop of hype. Big-name venture capitalist Marc Andreessen, who led Anki's $50 million funding round in 2013, calls the company "the best robotics startup I have ever seen." That may sound even stranger when you consider that Cozmo is a toy—a $180 gadget that might show up in a stocking at Christmas—but it also carries some truth. When it comes to intelligent robots, Cozmo represents the state of the art. Or thereabouts. The state of the art is ready for the world of toys, but not much else.
Tappeiner and his colleagues, a gaggle of PhDs who emerged from the robotics group at Carnegie Mellon University, will tell you much the same thing. Like so many others in the field, they admire the impressively mobile robots created by Google-owned Boston Dynamics, whose dog- and human-like droids radiate mechanical charisma.
But Tappeiner questions how long it will be before these robots are genuinely useful. "Does it really make sense for us to create a farming robot—or will it take 20 years to really do that well? We can do this," he says, nodding at Cozmo, "really incredibly well."

What's more, he believes, Cozmo can provide a seedbed for the future. Offering tools that even kids could use, a kit like Cozmo's SDK could help breed a new generation of robotics researchers. But it also gives seasoned robotics researchers a path into the heart of this toy automaton, and that can help advance today's work. "When we were in grad school," says Anki CEO Boris Sofman, "you would have to pay $10,000 for a platform that had 10 to 15 percent of the capabilities of Cozmo." Nate Koenig, chief technology officer of the Open Source Robotics Foundation and a longtime robotics researcher, says Cozmo deserves some skepticism. "How expressive is it? Does it really respond to humans?" he says. "I would definitely be cautious before buying." But he also says this kind of inexpensive yet malleable and at least marginally intelligent device can feed new avenues of research. "Any robot that you can program to have even some basic level of emotional contact with a person is a great research tool," he says.
Don’t we already have robots that are far smarter than this toy? Not really. In the commercial world, robots often work on assembly lines or move stuff across warehouses. But these machines are pretty much hard-wired for specific tasks. Inside an Amazon distribution center, a Kiva robot just picks up a bin and moves it. It’s not teaching itself to play chess during its down time.
Yes, we're moving toward robots that can respond to their environment and learn to do new things on their own. At a lab in Austin, Texas, IBM is plugging robots into its Watson AI services, which can understand and respond to questions and requests—at least in some cases. Last year, the US Defense Department's Darpa research arm held an extravagant contest for intelligent robots. Google is now using a technique called reinforcement learning—one of the techniques that helped bootstrap AlphaGo, the Google system that cracked the ancient game of Go—to teach robots how to pick up random objects. And researchers at the University of California, Berkeley, have used another key AlphaGo technology, deep neural networks, to teach machines how to screw on bottle caps. But all of these projects are really just experiments. The Darpa grand challenge was a failure, albeit a funny one.
In some ways, Cozmo is not quite at the forefront of robotics research. It doesn't use deep neural networks, an AI technique that promises to reinvent robotics by allowing machines to learn tasks through the analysis of enormous amounts of data. It doesn't have the necessary on-board processing power for this, and since it doesn't connect to the Internet, it can't grab this power from distant servers. But using other techniques that aren't as dependent on data analysis, Cozmo can recognize your face. It can pick up and move a set of blocks, even if they're not carefully arranged. And by tracking certain events—Did it almost fall off the table? Did it just beat you at a game? Is it having trouble finding something?—it can mimic emotions. If Cozmo comes to the edge of a table, it might look scared. If it has just lost a game to you, it may pout and look to play with someone else.
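Conceptually, that emotional layer is a mapping from tracked events to reactions. An illustrative table, not Anki's actual code, might look like this:

```python
# Invented event names and reactions, mirroring the behaviors described above.
EMOTION_RULES = {
    "near_table_edge":  "scared",
    "won_game":         "gloating",
    "lost_game":        "pouting",
    "cannot_find_cube": "frustrated",
}

def react(event):
    """Pick a reaction for an event, defaulting to a neutral state."""
    return EMOTION_RULES.get(event, "content")

for event in ("near_table_edge", "lost_game", "saw_face"):
    print(event, "->", react(event))
```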
As Koenig explains, this sort of “expressiveness” provides a foundation for others to build on. Thanks to the new SDK, researchers might even decide to connect Cozmo to other AI engines, including deep neural nets—a possibility Tappeiner says Anki itself may explore as well. Eventually, as hardware continues to improve, bots like Cozmo will be able to use AI like deep neural nets without needing to stay constantly connected to huge data centers in the cloud. Companies like Google and IBM are already pushing in this direction.
So, Cozmo is a toy. But it’s also the future. That future may be a long way off. But we have to start somewhere.
" |
764 | 2,015 | "Google's AI Is Now Smart Enough to Play Atari Like the Pros | WIRED" | "https://www.wired.com/2015/02/google-ai-plays-atari-like-pros" | "Robert McMillan

Last year Google shelled out an estimated $400 million for a little-known artificial intelligence company called DeepMind. Since then, the company has been pretty tight-lipped about what's been going on behind DeepMind's closed doors, but here's one thing we know for sure: There's a professional videogame tester who's pitted himself against DeepMind's AI software in a kind of digital battle royale.
The battlefield was classic videogames. And according to new research published today in the science journal Nature, Google's software did pretty well, smoking its human competitor in a range of Atari 2600 games like Breakout, Video Pinball, and Space Invaders and playing at pretty close to the human's level most of the time.
Google didn't spend hundreds of millions of dollars because it's expecting an Atari revival, but this new research does offer a hint as to what Google hopes to achieve with DeepMind. The DeepMind software uses two AI techniques---one called deep learning, the other deep reinforcement learning. Deep-learning techniques are already widely used at Google, and also at companies such as Facebook and Microsoft. They help with perception---helping Android understand what you're saying, and Facebook know whose photo you just uploaded. But until now, nobody has really matched Google's success at merging deep learning with reinforcement learning---those are algorithms that make the software improve over time, using a system of rewards.
By merging these two techniques, Google has built "a general-learning algorithm that should be applicable to many other tasks," says Koray Kavukcuoglu, a Google researcher. The DeepMind team says they're still scoping out the possibilities, but clearly improved search and smartphone apps are on the radar.
But there are other interesting areas as well. Google engineering guru Jeff Dean says that AI techniques being explored by Google---and other companies---could ultimately benefit the kinds of technologies that are being incubated in the Google X research labs. "There are potential applications in robots and self-driving-car kinds of things," he says. "Those are all things where computer vision is pretty important." Google says that its AI software, which it has dubbed the "Deep Q network agent," got 75 percent of the score of its professional tester in 29 of the 49 games it tried out. It did best in Video Pinball.
Deep Q works best when it lives in the moment---bouncing balls in Breakout, or trading blows in video boxing---but it doesn't do so well when it needs to plan things out in the long term: climbing down ladders and then jumping skeletons in order to retrieve keys in Montezuma's Revenge, for example. Poor old Deep Q scored a big fat zero in that game.
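The "Q" in Deep Q is a value the software learns to predict: the score you can expect from a given screen if you play well from here on. Training targets blend the points just earned with the network's own estimate of the future, a rule easy to sketch with a few arrays standing in for the network's outputs (this is an illustration of the published idea, not DeepMind's code):

```python
import numpy as np

gamma = 0.99                       # discount: how much the future counts

def q_targets(rewards, next_q_values, done):
    """Bellman targets: reward now, plus discounted best value later."""
    return rewards + gamma * next_q_values.max(axis=1) * (1.0 - done)

rewards = np.array([0.0, 1.0, 0.0])     # points scraped off the screen
next_q = np.array([[0.2, 0.5],          # value estimates for two actions
                   [0.0, 0.0],
                   [0.9, 0.1]])
done = np.array([0.0, 1.0, 0.0])        # episode-over flags
print(q_targets(rewards, next_q, done)) # [0.495 1.    0.891]
```

Because the target only ever looks one step ahead, games that demand long chains of unrewarded moves, like Montezuma's Revenge, are exactly where this approach struggles.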
But as it improves, the DeepMind work "could be the driving technology for robotics," says Itamar Arel, an artificial intelligence researcher who, like the DeepMind folks, is working on ways to merge deep learning with deep reinforcement techniques. He believes that DeepMind's technology is about 18 to 24 months away from the point where it could be used to experiment with real-world robots---and Google has its fair share of robots to test on, including the dog-like Boston Dynamics 1 machines it acquired in 2013.
The Nature paper doesn't describe any new technical breakthroughs, but it shows what happens when the DeepMind techniques are used on a much broader scale. "We used much bigger neural networks, we came up with better training regimes... and trained the systems for longer," says Demis Hassabis, DeepMind's founder. In 2013, DeepMind described "very early preliminary sample results," he says, "these are the full results complete with a whole bunch of careful controls and benchmarks." Hassabis won't tell us whether Google is running robot simulations too, but it's clear that the Atari 2600 work is only the beginning. "I can't really comment on our current work, but we are indeed running simulations of all kinds of games and environments," he says.
Additional reporting by Marcus Woo and Cade Metz.

1. Correction, 02/26/2015 10:00 EST: This story originally mis-identified the Google robotics company Boston Dynamics as Boston Robotics.
" |
765 | 2,013 | "Why Some Startups Say the Cloud Is a Waste of Money | WIRED" | "https://www.wired.com/wiredenterprise/2013/08/memsql-and-amazon" | "By Cade Metz
Eric Frenkiel. Photo: Alex Washburn/WIRED
Eric Frenkiel is through with convention and conformity. It was just too expensive.
In Silicon Valley, tech startups typically build their businesses with help from cloud computing services -- services that provide instant access to computing power via the internet -- and Frenkiel's startup, a San Francisco outfit called MemSQL , was no exception. It rented computing power from the granddaddy of cloud computing, Amazon.com.
But in May, about two years after MemSQL was founded, Frenkiel and company came down from the Amazon cloud, moving most of their operation onto a fleet of good old-fashioned computers they could actually put their hands on. They had reached the point where physical machines were cheaper -- much, much cheaper -- than the virtual machines available from Amazon. "I'm not a big believer in the public cloud," Frenkiel says. "It's just not effective in the long run." Frenkiel's story shows that while cloud computing is suited to many tasks -- including getting your startup off the ground or running a modest website -- it doesn't make sense for others. When Zynga's online gaming empire expanded to epic sizes in 2012, the company made headlines in shifting much of its operation off the Amazon cloud and into its own data centers, but smaller operations are making the move too.
Like MemSQL, the ride-sharing startup Uber recently moved most of its tech off the Amazon cloud, according to the company that now houses its physical servers, Peak Hosting.
And various others, from analytics outfit Mixpanel to online clothes-trading startup Tradesy , have disclosed similar shifts.
"I don't know how much this is written about," says Kit Colbert, an engineer at VMware, whose software is used by cloud services as well as in private data centers. "Within IT departments, public clouds do tend to get more expensive over time, especially when you reach a certain scale." Three years ago, Frenkiel and MemSQL tapped Amazon Web Services, or AWS, for the computing power they needed to build and test the software product at the heart of the company, a kind of new-age database.
Renting virtual servers from Amazon was more convenient than buying a fleet of physical machines, and the prices seemed reasonable -- not to mention the $10,000 in Amazon credits that MemSQL received through its seed funder, Y Combinator. "When you're lean and just getting started," Frenkiel says, "it's obviously the way to go." But then, early this year, his Amazon bill started to rise.
Anders Papitto, an engineer at San Francisco startup MemSQL. Once, he used Amazon servers to build and test software in the cloud. But now, like every other engineer at MemSQL, he uses the company's private fleet of computer servers.
MemSQL's database product runs across tens and even hundreds of servers, and as the company started testing the software on an ever larger number of Amazon virtual machines, Frenkiel and company realized the cloud no longer made sense -- at least not for the task at hand.
This past April, MemSQL spent more than $27,000 on Amazon virtual servers. That's $324,000 a year. But for just $120,000, the company could buy all the physical servers it needed for the job -- and those servers would last for a good three years. The company will add more machines over that time, as testing needs continue to grow, but its server costs won't come anywhere close to the fees it was paying Amazon.
Frenkiel estimates that, had the company stuck with Amazon, it would have spent about $900,000 over the next three years. But with physical servers, the cost will be closer to $200,000. "The hardware will pay for itself in about four months," he says.
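The arithmetic behind that payback estimate is easy to check. Here is a back-of-the-envelope sketch in Python using only the figures quoted in this story, treating the April bill as a steady monthly rate (the story's $900,000 three-year estimate presumably assumes the bill would not stay perfectly flat):

```python
monthly_cloud_bill = 27_000    # April's Amazon bill, taken as a steady rate
hardware_cost = 120_000        # one-time purchase of physical servers
lifetime_months = 36           # "a good three years"

# Months until buying hardware costs less than staying on the cloud.
breakeven = hardware_cost / monthly_cloud_bill
print(f"Break-even after {breakeven:.1f} months")   # ~4.4 months

# Three-year totals under each option, ignoring power, space, and staff.
print(f"Cloud, 3 years:    ${monthly_cloud_bill * lifetime_months:,}")  # $972,000
print(f"Hardware, 3 years: ${hardware_cost:,}")
```

The flat-rate total lands near Frenkiel's $900,000 figure, and the roughly four-month break-even matches his quote.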
One of the perks of the Amazon cloud is that you can instantly expand and shrink your pool of machines, paying only for what you need at any given time. That's a great thing if you're building a new company -- or running a website where traffic ebbs and flows. But MemSQL reached the point where its workload was relatively constant, where it was using pretty much the same number of virtual servers around the clock.
"The public cloud is phenomenal if you really need its elasticity," Frenkiel says. "But if you don't -- if you do a consistent amount of workload -- it's far, far better to go in-house." John Hall, the chief technology officer and technical co-founder at the Santa Monica, California-based Tradesy, recently came to a similar realization. "We've got only seven servers, and we've got a tremendous amount of computing power for the price," Hall says. "Versus what we'd get on the cloud, it's somewhere between 70 and 100 times cheaper." >'I don't know how much this is written about. Within IT departments, public clouds do tend to get more expensive over time, especially when you reach a certain scale.' Kit Colbert San Francisco's Mixpanel began on Rackspace, Amazon's main rival in the cloud game, but as its operation began to grow, it too soured on the cloud. The company's main issue was that it was forced to share resources with so many other companies on the Rackspace cloud, a setup that led to serious slowdowns. “We just couldn’t get consistent performance on the machines, because other people were on them,” says Mixpanel founder Suhail Doshi.
It's not an uncommon complaint. In fact, John Engates, the chief technology officer at Rackspace, agrees that some tasks are best handled on in-house hardware (which you can also lease from Rackspace). "Web servers belong in the public cloud," he says. "But things like databases -- that need really high performance, in terms of [input and output] and reading and writing to memory -- really belong on bare-metal servers or private setups." This is not to say that every growing operation will move onto physical servers. Amazon declined to comment for this story, but its cloud business is booming. The convenience of its services often outweighs the problems. The elastic nature of the cloud is one advantage, but these services can also provide a means of ready backup when data centers go dark.
Geolocation outfit Geoloqi moved off of Amazon in 2011 -- but then moved back a year later. "We reached a point where we needed to be able to scale faster than would have been practical with physical servers," says founder Aaron Parecki, before adding that Geoloqi's new parent company runs everything on the Amazon cloud.
Frenkiel and MemSQL still use Amazon for certain tasks -- their monthly bill has dropped to about $6,000 -- and that's often the case with companies that go physical. In some cases, the cloud works. In others, it doesn't.
" |
766 | 2,012 | "If Xerox PARC Invented the PC, Google Invented the Internet | WIRED" | "https://www.wired.com/wiredenterprise/2012/08/google-as-xerox-parc/all" | "By Cade Metz
The truth about Jeff Dean appeared on April Fool's Day 2007.
Somewhere inside Google, a private website served up a list of facts about Dean, one of Google's earliest employees and one of the main reasons the web giant handles more traffic than any other operation on the net. The site was only available to Googlers, but all were encouraged to add their own Jeff Dean facts. And many did.
"Jeff Dean once failed a Turing test when he correctly identified the 203rd Fibonacci number in less than a second," read one.
"Jeff Dean compiles and runs his code before submitting," read another, "but only to check for compiler and CPU bugs." "The speed of light in a vacuum used to be about 35 mph," said a third. "Then Jeff Dean spent a weekend optimizing physics." No, these facts weren't really facts. But they rang true. April Fool's Day is a sacred occasion at Google, and like any good April Fool's joke, the gag was grounded in reality. A Google engineer named Kenton Varda set up the website, playing off the satirical Chuck Norris facts that so often bounce around the net, and when he mailed the link to the rest of the company, he was careful to hide his identity. But he soon received a note from Jeff Dean, who had tracked him down after uncovering the digital footprints hidden in Google's server logs.
Inside Google, Jeff Dean is regarded with awe. Outside the company, few even know his name. But they should. Dean is part of a small group of Google engineers who designed the fundamental software and hardware that underpinned the company's rise to the web's most dominant force, and these creations are now mimicked by the rest of the net's biggest names – not to mention countless others looking to bring the Google way to businesses beyond the web.
>"Google did a great job of slurping up some of the most talented researchers in the world at a time when places like Bell Labs and Xerox PARC were dying. It managed to grab not just their researchers, but their lifeblood." \- Mike Miller Time and again, we hear the story of Xerox PARC , the Silicon Valley research lab that developed just about every major technology behind the PC revolution, from the graphical user interface and the laser printer to Ethernet networking and object-oriented programming. But because Google is so concerned with keeping its latest data center work hidden from competitors – and because engineers like Jeff Dean aren't exactly self-promoters – the general public is largely unaware of Google's impact on the very foundations of modern computing. Google is the Xerox PARC of the cloud computing age.
"Google did a great job of slurping up some of the most talented researchers in the world at a time when places like Bell Labs and Xerox PARC were dying," says Mike Miller, an affiliate professor of particle physics at the University of Washington and the chief scientist of Cloudant , one of the many companies working to expand on the technologies pioneered by Google. "It managed to grab not just their researchers, but also their lifeblood." These Google technologies aren't things you can hold in your hand – or even fit on your desk. They don't run on a phone or a PC. They run across a worldwide network of data centers.
They include sweeping software platforms with names like the Google File System, MapReduce, and BigTable, creations that power massive online applications by splitting the work into tiny pieces and spreading them across thousands of machines, much like micro-tasks are parceled out across a massive ant colony. But they also include new-age computer servers, networking hardware, and data centers that Google designed to work in tandem with this software. The idea is to build warehouse-sized computing facilities that can think like a single machine. Just as an ant colony acts as one entity, so does a Google data center.
While Silicon Valley stood transfixed by social networks and touch screens, Google remade the stuff behind the scenes, and soon, as the other giants of the web ran into their own avalanche of online data, they followed Google's lead. After they reinvented Google's search engine, GFS and MapReduce inspired Hadoop, a massive number-crunching platform that's now one of the world's most successful open source projects. BigTable helped launch the NoSQL movement, spawning an army of web-sized databases. And in so many ways, Google's new approach to data center hardware sparked similar efforts from Facebook, Amazon, Microsoft, and others.
To be sure, Google's ascendance builds on decades of contributions from dozens of equally unheralded computer scientists from many companies and research institutions, including PARC and Bell Labs. And like Google, Amazon was also a major influence on the foundations of the net – most notably through a research paper it published on a file system called Dynamo. But Google's influence is far broader.
The difference between it and a Xerox PARC is that Google profited mightily from its creations before the rest of the world caught on. Tools like GFS and MapReduce put the company ahead of the competition, and now, it has largely discarded these tools, moving to a new breed of software and hardware. Once again, the rest of the world is struggling to catch up.
Google's Twin Deities
Kenton Varda could have targeted several other Google engineers with his April Fool's Day prank. Jeff Dean just seemed like "the most amusing choice," Varda remembers. "His demeanor was perhaps the furthest from what you'd expect in a deity."
The obvious alternative was Sanjay Ghemawat, Dean's longtime collaborator. In 2004, Google published a research paper on MapReduce, the number-crunching platform that's probably the company's most influential data center creation, and the paper lists two authors: Dean and Ghemawat. The two engineers also played a major role in the design of the BigTable database. And Ghemawat is one of three names on the paper describing the first Google File System, a way of storing data across the company's vast network of data centers.
Even for Varda, who works on the team that oversees Google's infrastructure, the two engineers are difficult to separate. "Jeff and Sanjay worked together to develop much of Google's infrastructure and have always seemed basically joined at the hip," says Varda. "It's often hard to distinguish which of them really did what."
>"All code changes at Google require peer review prior to submission, but in Jeff and Sanjay's case, often one will send a large code review to the other, and the other will immediately 'LGTM' it, because they wrote the change together in the first place." \- Kenton Varda "All code changes at Google require peer review prior to submission, but in Jeff and Sanjay's case, often one will send a large code review to the other, and the other will immediately 'LGTM' it, because they wrote the change together in the first place." LGTM is Google-speak for "looks good to me." Varda means this quite literally. Over the years, Dean and Ghemawat made a habit of coding together while sitting at the same machine. Typically, Ghemawat does the typing. "He's pickier about his spacing," Dean says.
The two met before coming to Google. In the '90s, both worked at Silicon Valley research labs run by the Digital Equipment Corporation, a computing giant of the pre-internet age. Dean was at DEC's Western Research Lab in Palo Alto, California, and Ghemawat worked two blocks away, at a sister lab called the Systems Research Center. They would often collaborate on projects, not only because Dean had a thing for the gelato shop that sat between the two labs, but because they worked well together. At DEC, they helped build a new compiler for the Java programming language and a system profiler that remade the way we track the behavior of computer servers.
They came to Google as part of a mass migration from DEC's research arm. In the late '90s, as Google was just getting off the ground, DEC was on its last legs. It made big, beefy computer servers using microprocessors based on the RISC architecture, and the world was rapidly moving to low-cost machines equipped with Intel's x86 chips. In 1998, DEC was acquired by computer giant Compaq. Four years later, Compaq merged with HP. And the top engineers from DEC's vaunted research operation gradually moved elsewhere.
"DEC labs were going through a bit of rocky period after the Compaq acquisition," Dean says, "and it wasn't exactly clear what role research would have in the merged company." Some engineers went to Microsoft, which was starting a new research operation in Silicon Valley. Some went to a Palo Alto startup called VMware, whose virtual servers were about to turn the data center upside-down.
And many went to Google, founded the same year DEC was acquired by Compaq.
It was a time when several of the tech world's most influential research labs were losing steam, including Xerox PARC and Bell Labs, the place that produced such important technologies as the UNIX operating system and the C programming language.
But although these labs had already seen their best days, many of their researchers would feed a new revolution.
"At the time of the bubble burst in 2001, when everyone was downsizing, including DEC, the main two high-tech companies that were hiring were Google and VMware," says Eric Brewer, the University of California at Berkeley computer science professor who now works alongside Dean and Ghemawat. "Because of the crazy lopsidedness of that supply and demand, both companies hired many truly great people and both have done well in part because of that factor." >"At the time of the bubble burst in 2001, when everyone was downsizing, including DEC, the main two high-tech companies that were hiring were Google and VMware." \- Eric Brewer Like Dean and Ghemawat, several other engineers who arrived at Google from DEC would help design technologies that caused a seismic shift in the web as a whole, including Mike Burrows, Shun-Tak Leung, and Luiz André Barroso.
At the time, these engineers were just looking for interesting work – and Google was just looking for smart people to help run its search engine. But in hindsight, the mass migration from DEC provides the ideal metaphor for the changes Google brought to the rest of the world.
DEC was one of the first companies to build a successful web search engine – AltaVista, which came out of the Western Research Lab – and at least in the beginning, the entire thing ran on a single DEC machine.
But Google eclipsed AltaVista in large part because it turned this model on its head. Rather than using big, beefy machines to run its search engine, it broke its software into pieces and spread them across an army of small, cheap machines. This is the fundamental idea behind GFS, MapReduce, and BigTable – and so many other Google technologies that would overturn the status quo.
In hindsight, it was a natural progression. "The architecture challenges that arise when building a data system like Google's that spans thousands of computers aren't all that different from the challenges that arise in building a sophisticated monolithic system," says Armando Fox, a professor of computer science at the University of California, Berkeley who specializes in large-scale computing. "The problems wear very similar clothing, and that's why it was essential to have people with experience at places like DEC."
Jeff Dean Follows His Uncle to Google
Jeff Dean was the first to arrive from DEC. He came by way of his "academic uncle," Urs Hölzle.
Hölzle was one of Google's first 10 employees, and as the company's first vice president of engineering, he oversaw the creation of the Google infrastructure, which now spans more than 35 data centers across the globe, judging from outside sources. He joined Google from a professorship at the University of California at Santa Barbara, and before that, he studied at Stanford under a prof named David Ungar, developing some of the core technologies used in today's compilers for the Java programming language.
Dean's academic adviser also studied with Ungar, and this made Hölzle his academic uncle. In 1999, with DEC in its death throes, Dean left the company for a startup called MySimon, but when he saw Hölzle turn up at Google, he sent an email looking for a new Google job of his own. He was soon hired by the same man who hired Hölzle: Google co-founder Larry Page.
At first, Dean was charged with building an ad system for Google's fledgling search engine. But after a few months, he moved onto the company's core search technologies, which were already buckling under the weight of a rapidly growing worldwide web. He was soon joined by Ghemawat, who made the move to Google in large part because Dean and other DEC researchers – Krishna Bharat and Monika Henzinger – were already on board.
"It's fairly likely that I might never have interviewed at Google if Jeff hadn't been there," Ghemawat says. They quickly picked up where they left off at DEC. Over the next three or four years, together with an ever changing group of other engineers, the two engineers designed and built multiple revisions of the company's core systems for crawling the web, indexing it, and serving search results to users across the globe.
Yes, they would often code at the same machine – while drinking an awful lot of coffee. Cappuccino is their drug of choice. Their partnership works, Dean says, because Ghemawat is more level-headed. "I tend to be very impatient, thinking about all the ways we can do something, my mind and hands spinning at a very fast rate. Sanjay gets excited, but in a more subdued way. He corrects my course, so that we end up moving in the right direction." But Ghemawat says Dean's approach is just as important. He keeps them moving forward. "I often get down, thinking about all the different ways of doing something, worrying about the right way," Ghemawat says. "It's good to have someone with the energy and excitement needed to get to the end goal." The big breakthroughs came with the creation of the Google File System and MapReduce, which rolled out across Google's data centers in the early part of the last decade. These platforms provided a more reliable means of building the massive index that drives Google's search engine. As Google crawled the world's webpages, grabbing info about each, it could spread this data over tens of thousands of servers using GFS, and then, using MapReduce, it could use the processing power inside all those servers to crunch the data into a single, searchable index.
>"What do you do when your job is to take the entire internet, index it, and make a copy of it – and not do it in a way that the copy is the same size as the internet? That's a pretty interesting technical challenge." \- Jason Hoffman The trick is that these platforms didn't break when machines failed or the network slowed. When you're dealing with ten of thousands of ordinary servers as Google was, machines fail all the time. With GFS and MapReduce, the company could duplicate data on multiple machines. If one broke, another was there to step in.
"The scale of the indexing work made it complicated to deal with machine failures and delays, so we started looking for abstractions that would allow for automatic parallelization across a collection of machines – to give higher performance and scalability – and could also make long-running computations that ran on thousands of machines robust and reliable," Jeff Dean says, in describing the thinking behind MapReduce. Once these tools were in place on the search engine, he explains, Google realized they could help run other web services too.
BigTable arose in similar fashion. Like MapReduce, it ran atop the Google File System, but it didn't process data. It operated as a massive database. "It manages rows of data," Dean says, "and spreads them across more and more machines as you need it." It didn't give you as much control over the data as a traditional relational database, but it could handle vast amounts of information in ways you couldn't with platforms designed for a single machine.
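Dean's one-line description of BigTable, sorted rows spread across more machines as needed, can also be sketched in a few lines of Python. The real system adds column families, timestamps, and automatic tablet splitting; here the split points are fixed and the names invented:

```python
import bisect

class Tablet:
    """One contiguous range of row keys, served by one machine."""
    def __init__(self):
        self.rows = {}  # row key -> row data

class Table:
    def __init__(self, split_points):
        # split_points partition the sorted key space: ["g", "p"] yields
        # three ranges -- keys below "g", keys in [g, p), keys from "p" up.
        self.split_points = sorted(split_points)
        self.tablets = [Tablet() for _ in range(len(split_points) + 1)]

    def _tablet_for(self, row_key):
        return self.tablets[bisect.bisect_right(self.split_points, row_key)]

    def put(self, row_key, row):
        self._tablet_for(row_key).rows[row_key] = row

    def get(self, row_key):
        return self._tablet_for(row_key).rows.get(row_key)

table = Table(split_points=["g", "p"])
table.put("com.cnn.www/index", {"contents": "<html>..."})
table.put("com.wired.www/home", {"contents": "<html>..."})
print(table.get("com.wired.www/home"))
```

Growing the table means adding split points and moving row ranges onto new machines, which is how the system scales out without the cross-machine transactions a relational database would demand.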
The same story appears again and again. As it grew, Google faced an unprecedented amount of data, and it was forced to build new software.
"What do you do when your job is to take the entire internet, index it, and make a copy of it – and not do it in a way that the copy is the same size as the internet? That's a pretty interesting technical challenge," says Jason Hoffman, the chief technology officer at cloud computing outfit Joyent.
"Very often the hammer swinger knows how to make the hammer. Most things that are innovative come from a forge. They come from those points where you're facing failure." Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg The Data Center Empire Built on Crème Brûlée Luiz André Barroso followed Jeff Dean and Sanjay Ghemawat from DEC to Google. But he almost didn't.
Barroso had worked alongside Dean at DEC's Western Research Lab, and in 2001, he was weighing job offers from Google and VMware. After visiting and interviewing with both companies, he put together a spreadsheet listing the reasons to join each. But the spreadsheet ended in a dead heat: 122 reasons for Google, and 122 for VMware.
Then he talked to Dean, who asked whether the spreadsheet included the crème brûlée served by executive chef Charlie Ayers the day he visited Google. "Crème brûlée is his absolute favorite," Dean remembers. "I asked if he had factored it into his 122-point list, and he said: 'No! I forgot!'" Barroso accepted Google's job offer the next morning.
Barroso was unusual in that he wasn't necessarily a software engineer. At DEC, he helped pioneer multicore processors – processors that are actually many processors in one. But after Barroso briefly worked on Google software, Hölzle put him in charge of an effort to overhaul Google's hardware infrastructure, including not only its servers and other computing gear, but the data centers housing all that hardware. "I was the closest thing we had to a hardware person," Barroso remembers.
>"Crème brûlée is his absolute favorite. I asked if he had factored it into his 122-point list, and he said: 'No! I forgot!'" \- Jeff Dean Hölzle, Barroso, and their "platforms team" began by rethinking the company's servers. In 2003, rather purchase standard machines from the likes of Dell and HP, the team started cutting costs by designing their own servers and then contracting with manufacturers in Asia to build them – the same manufacturers who were building gear for the Dells and the HPs. In short, Google cut out the middle men.
Uniquely, each Google machine included its own 12-volt battery that could pick up the slack if the system lost its primary source of power. This, according to Google, was significantly more efficient than equipping the data center with the massive UPSes – uninterruptible power supplies – that typically provide backup power inside the world's computing facilities.
Then the team went to work on the data centers housing these servers. At the time, Google merely leased data center space from other companies. But Barroso and crew started from scratch, designing and building their own data centers in an effort to save money and power, but also to improve the performance of Google's web services.
The company began with a new facility in The Dalles, Oregon, a rural area where it could tap into some cheap power – and some serious tax breaks. But the main goal was to build an entire data center that behaved like a single machine. Barroso and Hölzle call it "warehouse-scale computing." "Large portions of the hardware and software resources in these facilities must work in concert to efficiently deliver good levels of internet service performance, something that can only be achieved by a holistic approach to their design and deployment," Barroso and Hölzle write in their seminal 2009 book on the subject, The Datacenter as a Computer.
"In other words, we must treat the data center itself as one massive warehouse-scale computer.” They designed the facility using a new kind of building block. They packed servers, networking gear, and other hardware into standard shipping containers – the same kind used to transport goods by boat and train – and these data center "modules" could be pieced together into a much larger facility. The goal was to maximize the efficiency of each module. Apparently, the notion came to Larry Page in 2003, when he saw the Internet Achieve give a presentation on its plans for similar modules – though Barroso doesn't remember where the idea came from. "Other than it wasn’t me," he says.
The company's facility in The Dalles went live in 2005. Over the years, there were rumors of data center modules and custom servers, but the details remained hidden until 2009, when Google held a mini-conference at its Silicon Valley headquarters. In the data center, Google isn't content to merely innovate. It keeps the innovations extremely quiet until it's good and ready to share them with the rest of the world.
The Tesla Effect
Larry Page has a thing for Nikola Tesla. According to Steven Levy's behind-the-scenes look at Google – In The Plex – Page regarded Tesla as an inventor on par with Edison, but always lamented his inability to turn his inventions into profits and long-term recognition.
Clearly, the cautionary tale of Nikola Tesla influenced the way Google handles its core technologies. It treats them as trade secrets, and much like Apple, the company has a knack for keeping them secret. But in some cases, after a technology runs inside Google for several years, the company will open the kimono. "We try to be as open as possible – without giving up our competitive advantage," says Hölzle. "We will communicate the idea, but not the implementation." In 2003 and 2004, the company published papers on GFS and MapReduce.
Google let the papers speak for themselves, and before long, a developer named Doug Cutting used them to build an indexing system for an open source search engine he called Nutch. After Cutting joined Yahoo – Google's primary search rival at the time – the project morphed into Hadoop.
>"We try to be as open as possible – without giving up our competitive advantage. We will communicate the idea, but not the implementation." \- Urs Hölzle A way of crunching epic amounts of data across thousands of servers, Hadoop has long been used by the other giants of the web, including Facebook, Twitter, and Microsoft, and now, it's spreading into other businesses. By 2016, according to research outfit IDC, the project will fuel a $813 million software market.
History repeated itself with BigTable. In 2006, Google published a paper on its sweeping database, and together with an Amazon paper describing a data store called Dynamo, it spawned the NoSQL movement, a widespread effort to build databases that could scale to thousands of machines.
"If you look at every NoSQL solution out there, everyone goes back to the Amazon Dynamo paper or the Google BigTable paper," says Joyent's Jason Hoffman. "What would the world be like if no one at Google or Amazon ever wrote an academic paper?" Google's hardware operation is a slightly different story. We still know relatively little about the inside of Google's data centers, but the company's efforts to design and build its own gear has undoubtedly inspired similar efforts across the web and beyond. Facebook is now designing its own servers , server racks, and storage equipment, with help from manufacturers in Asia. According to outside sources , the likes of Amazon and Microsoft are doing much the same. And with Facebook "open sourcing" its designs under the aegis of the Open Compute Foundation, many others companies are exploring similar hardware.
What's more, modular data centers are now a mainstay on the web. Microsoft uses them, as do eBay and countless others. Mike Manos, Microsoft's former data center guru, denies that Google was the inspiration for the move to modular data centers, pointing out that similar modules date back to the 1960s, but it was Google that brought the idea to the forefront. As Cloudant's Mike Miller points out, GFS and MapReduce also depend on ideas from the past. But Google has a knack for applying these old ideas to very new problems.
Google's Past Is Prologue
The irony is that Google has already replaced many of these seminal technologies. Over the past few years, it swapped out GFS for a new platform dubbed "Colossus," and in building its search index, it uses a new system known as Caffeine, which includes pieces of MapReduce and operates in a very different way, updating the index in realtime rather than rebuilding the thing from scratch.
Google may still use data center modules in The Dalles, but it seems they no longer play a role in its newer facilities. We don't know much about what the company now uses inside these top secret facilities, but you can bet it's a step ahead of what it did in the past.
In recent years, Google published papers on Caffeine and two other sweeping software platforms that underpin its services: Pregel, a "graph" database for mapping relationships between pieces of data, and Dremel, a means of analyzing vast amounts of data at super high speeds. Multiple open source projects are already working to mimic Pregel. At least one is cloning Dremel. And Cloudant's Miller says Caffeine – aka Percolator – is sparking changes across the Hadoop and NoSQL markets.
These are just some of the latest creations in use at Google. No doubt, there are many others we don't know about. But whatever Google is using now, it will soon move on. In May of last year, University of California at Berkeley professor Eric Brewer announced he was joining the team building Google's "next gen" infrastructure. "The cloud is young," he said. "Much to do. Many left to reach." Brewer – one of the giants of distributed computing research – is yet another sign that Google is the modern successor to Xerox PARC. But the company also takes the PARC ethos a step further.
You can trace Google's research operation through DEC, all the way back to PARC's earliest days. The DEC Systems Research Center was founded by Robert Taylor, the same man who launched the computer science laboratory at PARC.
Taylor started the SRC because he felt that by the early '80s, PARC had lost its way. "A lot of people who I worked with at PARC were as disenchanted with PARC as I was," he says. "So they joined me." He worked to build the lab in the image of the old PARC Computer Science Lab – even in terms of its physical setup – and in some ways, he succeeded.
But it suffered from the same limitations as so many corporate research operations. It took ages to get the research into the marketplace. This was also true at the DEC Western Research Lab, where Jeff Dean worked. And this is what brought him to Google. "Ultimately, it was this frustration of being one level removed from real users using my work that led me to want to go to a startup," Dean says.
But Google wasn't the typical startup. The company evolved in a way that allowed it to combine the challenge of research with the satisfaction of instantly putting the results into play. Google was a research operation – and yet it wasn't. "The Google infrastructure work wasn't really seen as research," Ghemawat says. "It was about how do we solve the problems we're seeing in production." For some, the drawback of working on Google's core infrastructure is that you can't tell anyone else what you're doing. This is one of the reasons an engineer named Amir Michael left Google to build servers at Facebook. But, yes, there are times when engineers are let loose to publish their work or even discuss it in public.
For Google, it's a balancing act. Though some are critical of the particular balance, it's certainly working for Google. And there's no denying its methods have pushed the rest of the web forward. PARC never had it so good.
" |
767 | 2,012 | "Google Shaman Explains Mysteries of 'Compute Engine' | WIRED" | "https://www.wired.com/wiredenterprise/2012/07/google-compute-engine" | "By Cade Metz
Photo: yukop/Flickr
Google started work on the Google Compute Engine over a year and a half ago, and it was all Peter Magnusson could do to keep his mouth shut.
Magnusson is the director of engineering for Compute Engine's sister service, Google App Engine , and over the past 18 months, as he spoke at various conferences and chatted with various software developers about Google's place in the world of cloud computing, he couldn't quite explain how serious the company is about competing with Amazon's massively popular Elastic Compute Cloud and other commercial services that seek to reinvent the way online applications are built and operated.
Google entered the cloud computing game back in 2008, when it unveiled Google App Engine, a service that lets outside software developers build and host applications atop the same sweeping infrastructure that runs Google's own web services, such as Google Search and Gmail. Like Amazon's cloud, this is a way of running online applications without setting up your own data center infrastructure. But it was difficult to tell whether the service was just one of those half-hearted Google experiments that would one day fall by the wayside. Though the service let you automatically accommodate an infinite amount of traffic -- or thereabouts -- it put tight restrictions on what programmers could and couldn't do, and this seemed to limit its appeal.
Last fall, Google signaled its intent when it removed the "beta test" tag from Google App Engine and launched Google Cloud Storage, a separate service dedicated to housing large amounts of data. But all the pieces fell into place last week when the company uncloaked Compute Engine , a service that gives developers access to hundreds of thousands of raw virtual machines at a moment's notice.
"Google Compute Engine gives you Linux virtual machines at Google-scale. You can spin up two VMs or 10,000 VMs," said Urs Hölzle, the man who oversees Google's vast infrastructure. "You benefit from the efficiency of Google's data centers and our decade of experience running them." What this means is that developers and businesses can grab a vast amount of processing power and apply it to almost any task they want. Google is not only offering App Engine -- a service that lets you build applications without having to worry about raw storage and processing power -- it's also giving you, well, raw storage and processing power. In other words, it's going head-to-head with Amazon, the undisputed king of commercial cloud services that has long offered such raw resources as well as "higher level" services for building and running massive applications.
"We're pairing Compute Engine with App Engine," says Peter Magnusson. "But, increasingly, they will be able to work together." Google pioneered the art of the "cloud" infrastructure. But Amazon beat it to the idea of sharing such an infrastructure with the rest of the world. Six years after Amazon first offered its web services to outside developers and businesses, Google is still playing catch-up. But it's intent on making up that lost ground.
Google showed just how much it believes in Compute Engine, Magnusson says, when it tapped Hölzle to introduce the thing at its annual developer conference in San Francisco. Hölzle is the former UC Santa Barbara computer science professor who joined Google in early 1999 to oversee the growth of its internal network. At the time, the company had fewer than 10 employees, but he ended up building a worldwide network of data centers that are among the most advanced on Earth.
Google vice president Sundar Pichai calls Hölzle "the person -- more than anyone -- responsible for building all of Google's infrastructure." Hölzle rarely speaks in public -- Google views its data center infrastructure as a trade secret best kept hidden from competitors -- but there he was on Thursday, on stage at Google I/O, showing off Google Compute Engine. He sat down with Wired as well, bringing a shaman-like air to the discussion of data center design.
He wears wire-rimmed glasses, a diamond stud earring, a closely cropped beard, and a slight uplift of dark hair tinged with gray, and -- having grown up in Switzerland -- he speaks with just a hint of an accent. When another Googler mispronounces his name, he says it's to be expected. "There's an old joke," he says. "During World War II, all the other countries knew the password for the Swiss army. But it didn't matter because they couldn't pronounce it."
Urs Hölzle. Image: Urs Hölzle
Compute Engine, Hölzle tells us, is a natural extension of the infrastructure he and his team have spent the last 13 years piecing together. Google hasn't just set up some new machines and tossed on some hypervisor software that runs virtual machines. Like App Engine before it, Compute Engine runs atop the unified software platform that spans Google's roughly 40 data centers worldwide.
Hölzle and company describe the Google infrastructure as "warehouse-scale computing." The idea is that each data center -- running a common software platform -- behaves like a single machine, running massive online applications and providing these applications with additional resources as needed. Google Compute Engine was built atop its existing software platform, taking advantage of all the work that came before.
"Compute Engine benefits from a lot of the code we've already written," Hölzle says. "If you look at the product and you look at the lines of code that had to be written for it to work, 80 or 90 percent of it is what we had already written for our internal infrastructure." Google has publicly discussed part of its overarching software platform but not others. Hölzle declines to go into much detail, but he does say that Compute Engines runs atop Google's existing "server cluster management" service, which has long allowed Google internal engineers to rope together CPU power and memory from across its network of servers and apply it to the task at hand. According to M.C. Srivas -- a former Google engineer -- this service is known as Borg.
What Hölzle will say is that Compute Engine was built using the KVM hypervisor, open source software that was built to run virtual machines atop the Linux operating system. KVM, or Kernel-based Virtual Machine, is a little different from the XEN hypervisor that underpins Amazon's service or the VMware vSphere hypervisor that drives applications inside so many other data centers. Whereas vSphere and Xen run right on the server hardware, KVM runs inside an existing operating system at the "user level," meaning it operates much like any other piece of software running on the OS.
In short, Google has added Compute Engine atop its sea of Linux machines in much the same way it would add any other service. "To our cluster management system, KVM just looks like another task, such as a search task," says Hölzle. "That's what lets us reuse a lot of our existing infrastructure." The result, according to Hölzle, is that -- compared to competitors -- Google Compute Engine can provide 50 percent more compute power at the same cost. "You don't have to choose between scalability and price," he says, arguing that Google is far more adept at getting those raw virtual machines to work in concert and solve a common task.
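Hölzle's "just another task" line is easy to make concrete. A cluster scheduler sees every workload, search shard or virtual machine, as a bundle of resource requirements to be packed onto machines. The following toy greedy bin-packing sketch in Python is emphatically not Borg (every name and number is invented); it just shows a VM scheduled through the same code path as a search task:

```python
import heapq

class Machine:
    def __init__(self, name, cpus, ram_gb):
        self.name, self.cpus, self.ram_gb = name, cpus, ram_gb

def schedule(tasks, machines):
    # Greedily place each task on the machine with the most free CPU.
    free = [(-m.cpus, m.name, m) for m in machines]
    heapq.heapify(free)
    placements = {}
    for task_name, cpus, ram_gb in tasks:
        _, name, m = heapq.heappop(free)
        if m.cpus < cpus or m.ram_gb < ram_gb:
            raise RuntimeError(f"no room for {task_name}")
        m.cpus -= cpus
        m.ram_gb -= ram_gb
        placements[task_name] = name
        heapq.heappush(free, (-m.cpus, name, m))
    return placements

machines = [Machine("rack1-node1", 16, 64), Machine("rack1-node2", 16, 64)]
tasks = [("search-shard-7", 4, 16),       # an ordinary search task
         ("kvm-vm-customer-42", 8, 32)]   # a Compute Engine VM: same treatment
print(schedule(tasks, machines))
```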
Amazon did not respond to a request for comment. But Jason Hoffman -- the chief technology officer at Joyent, a cloud computing outfit that also uses the KVM hypervisor to serve up virtual machines across the net -- disputes Hölzle's compute-per-dollar claims, saying that Google's price list indicates that Compute Engine is actually more expensive than Amazon or Joyent. "I just don't get it," he says.
Mathew Lodge -- vice president of cloud services at VMware, which offers a software platform called vCloud that lets outside outfits build services similar to Joyent and AWS and Google Compute Engine -- also questions how reliable Google's service will be. He claims that vCloud services are less susceptible to downtime because they can be updated on the fly, while virtual machines are still running. But as with its other services, Google offers a service level agreement that promises 99.95 percent uptime, and at least in recent years, some Google services, including Gmail, have exceeded this guarantee.
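For a sense of scale, a 99.95 percent uptime promise still leaves a small downtime budget, which a quick calculation makes concrete:

```python
sla = 0.9995
minutes_per_month = 30 * 24 * 60  # 43,200 minutes in a 30-day month
# The SLA permits about 21.6 minutes of downtime per month.
print(f"{(1 - sla) * minutes_per_month:.1f} minutes of allowed downtime per month")
```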
Joyent built a new version of the KVM hypervisor for the Sun Solaris-based operating system that underpins its service, and Hoffman has always said that this setup is faster than Xen, the hypervisor used by Amazon. But Simon Crosby, who oversaw the creation of Xen, will tell you that any performance advantage is minimal and that it continues to shrink.
The various players also disagree on whose hypervisor is more secure, but Hölzle's primary argument for Google Compute Engine -- after running this sort of thing for more than 10 years within the company -- is that Google has more experience than most.
However Google Compute Engine compares to the competition, Google is intent on making up lost ground against Amazon, whose services now run as much as 1 percent of the internet, according to one estimate.
Compute Engine won't replace App Engine. It will complement App Engine. "You can use one or the other or both," says Greg D'alesandre, who oversees App Engine. "We offered App Engine for a while, and what we realized is that every once in a while, there are going to be things that are simpler and more straightforward to do with VMs than to do with App Engine." According to Hölzle, App Engine is now running over 1 million active applications, handling 7.5 billion hits a day and 2 trillion data store operations a month. This, he says, makes the service "the largest public NoSQL data store infrastructure in the world" -- a reference to the new-age database model that spreads vast amounts of information across a sea of distributed machines. But with Compute Engine, the company wants to tackle more than NoSQL.
It wants to tackle everything.
" |
768 | 2,016 | "The Rise of the Artificially Intelligent Hedge Fund | WIRED" | "https://www.wired.com/2016/01/the-rise-of-the-artificially-intelligent-hedge-fund" | "By Cade Metz
Image: Then One/WIRED
Last week, Ben Goertzel and his company, Aidyia, turned on a hedge fund that makes all stock trades using artificial intelligence---no human intervention required. "If we all die," says Goertzel, a longtime AI guru and the company's chief scientist, "it would keep trading." He means this literally. Goertzel and other humans built the system, of course, and they'll continue to modify it as needed. But their creation identifies and executes trades entirely on its own, drawing on multiple forms of AI, including one inspired by genetic evolution and another based on probabilistic logic.
Each day, after analyzing everything from market prices and volumes to macroeconomic data and corporate accounting documents, these AI engines make their own market predictions and then "vote" on the best course of action.
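That "voting" step is an ensemble pattern, and in miniature it can be sketched in a few lines of Python. The engine names and thresholds below are illustrative stand-ins, not Aidyia's actual models:

```python
from collections import Counter

def evolutionary_engine(features):
    return "buy" if features["momentum"] > 0 else "sell"

def probabilistic_engine(features):
    return "buy" if features["p_up"] > 0.55 else "hold"

def macro_engine(features):
    return "hold" if features["volatility"] > 0.3 else "buy"

ENGINES = [evolutionary_engine, probabilistic_engine, macro_engine]

def vote(features):
    # Majority vote across engines; anything short of a majority is "hold".
    tally = Counter(engine(features) for engine in ENGINES)
    decision, count = tally.most_common(1)[0]
    return decision if count > len(ENGINES) // 2 else "hold"

print(vote({"momentum": 0.8, "p_up": 0.6, "volatility": 0.1}))  # buy
```

The appeal of the pattern is that no single model's blind spot decides a trade on its own; production systems typically weight the votes by each engine's recent track record rather than counting them equally.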
Though Aidyia is based in Hong Kong, this automated system trades in US equities, and on its first day, according to Goertzel, it generated a 2 percent return on an undisclosed pool of money. That's not exactly impressive, or statistically relevant. But it represents a notable shift in the world of finance. Backed by $143 million in funding, San Francisco startup Sentient Technologies has been quietly trading with a similar system since last year. Data-centric hedge funds like Two Sigma and Renaissance Technologies have said they rely on AI. And according to reports, two others---Bridgewater Associates and Point72 Asset Management, run by big Wall Street names Ray Dalio and Steven A. Cohen---are moving in the same direction.
Hedge funds have long relied on computers to help make trades. According to market research firm Preqin, some 1,360 hedge funds make a majority of their trades with help from computer models---roughly 9 percent of all funds---and they manage about $197 billion in total. But this typically involves data scientists---or "quants," in Wall Street lingo---using machines to build large statistical models. These models are complex, but they're also somewhat static. As the market changes, they may not work as well as they worked in the past. And according to Preqin's research, the typical systematic fund doesn't always perform as well as funds operated by human managers. In recent years, however, funds have moved toward true machine learning, where artificially intelligent systems can analyze large amounts of data at speed and improve themselves through such analysis. The New York company Rebellion Research, founded by the grandson of baseball Hall of Famer Hank Greenberg, among others, relies upon a form of machine learning called Bayesian networks, using a handful of machines to predict market trends and pinpoint particular trades. Meanwhile, outfits such as Aidyia and Sentient are leaning on AI that runs across hundreds or even thousands of machines. This includes techniques such as evolutionary computation, which is inspired by genetics, and deep learning, a technology now used to recognize images, identify spoken words, and perform other tasks inside Internet companies like Google and Microsoft.
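Rebellion's Bayesian networks are richer than this, but a toy probabilistic classifier conveys the flavor. The sketch below is built entirely on invented assumptions: synthetic prices, two hand-picked features, and scikit-learn's GaussianNB standing in for a full Bayesian network.

```python
# A minimal sketch of probabilistic market prediction, not Rebellion
# Research's actual system: fit a naive Bayes classifier on simple
# daily features and predict whether the next day closes up.
# All data here is synthetic; features and labels are illustrative.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
prices = 100 + np.cumsum(rng.normal(0, 1, 500))  # fake daily closes

returns = np.diff(prices) / prices[:-1]
# Features for day t: yesterday's return and a 5-day momentum signal.
X = np.column_stack([returns[4:-1],
                     prices[5:-1] / prices[:-6] - 1])
y = (returns[5:] > 0).astype(int)  # 1 if the next day closed up

model = GaussianNB().fit(X[:-100], y[:-100])
print("held-out accuracy:", model.score(X[-100:], y[-100:]))
```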
The hope is that such systems can automatically recognize changes in the market and adapt in ways that quant models can't. "They're trying to see things before they develop," says Ben Carlson, the author of A Wealth of Common Sense: Why Simplicity Trumps Complexity in Any Investment Plan , who spent a decade with an endowment fund that invested in a wide range of money managers.
This kind of AI-driven fund management shouldn't be confused with high-frequency trading. It isn't looking to front-run trades or otherwise make money from speed of action. It's looking for the best trades in the longer term---hours, days, weeks, even months into the future. And more to the point, machines---not humans---are choosing the strategy.
Though the company has not openly marketed its fund, Sentient CEO Antoine Blondeau says it has been making official trades since last year using money from private investors (after a longer period of test trades). According to a report from Bloomberg , the company has worked with the hedge fund business inside JP Morgan Chase in developing AI trading technology, but Blondeau declines to discuss its partnerships. He does say, however, that its fund operates entirely through artificial intelligence.
The system allows the company to adjust certain risk settings, says chief science officer Babak Hodjat, who was part of the team that built Siri before the digital assistant was acquired by Apple. But otherwise, it operates without human help. "It automatically authors a strategy, and it gives us commands," Hodjat says. "It says: 'Buy this much now, with this instrument, using this particular order type.' It also tells us when to exit, reduce exposure, and that kind of stuff." According to Hodjat, the system grabs unused computer power from "millions" of computer processors inside data centers, Internet cafes, and computer gaming centers operated by various companies in Asia and elsewhere. Its software engine, meanwhile, is based on evolutionary computation---the same genetics-inspired technique that plays into Aidyia's system.
In the simplest terms, this means it creates a large and random collection of digital stock traders and tests their performance on historical stock data. After picking the best performers, it then uses their "genes" to create a new set of superior traders. And the process repeats. Eventually, the system homes in on a digital trader that can successfully operate on its own. "Over thousands of generations, trillions and trillions of 'beings' compete and thrive or die," Blondeau says, "and eventually, you get a population of smart traders you can actually deploy." Though evolutionary computation drives the system today, Hodjat also sees promise in deep learning algorithms---algorithms that have already proven enormously adept at identifying images, recognizing spoken words, and even understanding the natural way we humans speak. Just as deep learning can pinpoint particular features that show up in a photo of a cat, he explains, it could identify particular features of a stock that can make you some money.
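Here is a minimal sketch of that selection-and-mutation loop, not Sentient's system: the "traders" are just weight vectors over three momentum signals, fitness is profit on a synthetic price series, and everything else is invented for illustration.

```python
# A toy version of the evolutionary search described above: random
# traders are scored on synthetic prices, the best are kept, and
# mutated copies of them seed the next generation.
import numpy as np

rng = np.random.default_rng(1)
prices = 100 + np.cumsum(rng.normal(0, 1, 1000))
returns = np.diff(prices) / prices[:-1]

def signals(t):
    # Three crude signals: 1-day, 5-day, and 20-day momentum.
    return np.array([prices[t] / prices[t - 1] - 1,
                     prices[t] / prices[t - 5] - 1,
                     prices[t] / prices[t - 20] - 1])

def fitness(weights):
    # Trade long/short by the sign of a weighted signal combination.
    pnl = 0.0
    for t in range(20, len(prices) - 1):
        position = np.sign(weights @ signals(t))
        pnl += position * returns[t]
    return pnl

population = rng.normal(size=(50, 3))  # 50 random traders
for generation in range(20):
    scores = np.array([fitness(w) for w in population])
    elite = population[np.argsort(scores)[-10:]]        # keep the top 10
    children = elite[rng.integers(0, 10, 40)] + rng.normal(0, 0.2, (40, 3))
    population = np.vstack([elite, children])           # next generation
print("best fitness:", max(fitness(w) for w in population))
```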
Goertzel---who also oversees the OpenCog Foundation, an effort to build an open source framework for general artificial intelligence---disagrees. This is partly because deep learning algorithms have become a commodity. "If everyone is using something, its predictions will be priced into the market," he says. "You have to be doing something weird." He also points out that, although deep learning is suited to analyzing data defined by a very particular set of patterns, such as photos and words, these kinds of patterns don't necessarily show up in the financial markets. And if they do, they aren't that useful---again, because anyone can find them.
For Hodjat, however, the task is to improve on today's deep learning. And this may involve combining the technology with evolutionary computation. As he explains it, you could use evolutionary computation to build better deep learning algorithms. This is called neuroevolution. "You can evolve the weights that operate on the deep learner," Hodjat says. "But you can also evolve the architecture of the deep learner itself." Microsoft and other outfits are already building deep learning systems through a kind of natural selection , though they may not be using evolutionary computation per se.
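A minimal sketch of what "evolving the weights that operate on the deep learner" can mean, under invented assumptions: a tiny fixed-architecture network fit to XOR purely by mutation and selection, with no gradient descent anywhere. Real neuroevolution systems are far larger and can also evolve the architecture itself.

```python
# Neuroevolution in miniature: evolve the weights of a small network
# instead of training them by backpropagation. The XOR task and all
# sizes are invented for illustration.
import numpy as np

rng = np.random.default_rng(2)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

def forward(params, x):
    W1, b1, W2, b2 = params
    h = np.tanh(x @ W1 + b1)
    return 1 / (1 + np.exp(-(h @ W2 + b2)))  # sigmoid output

def random_params():
    return [rng.normal(size=(2, 4)), rng.normal(size=4),
            rng.normal(size=4), rng.normal()]

def loss(params):
    return float(np.mean((forward(params, X) - y) ** 2))

population = [random_params() for _ in range(30)]
for generation in range(200):
    population.sort(key=loss)
    elite = population[:10]
    # Children are mutated copies of random elite parents.
    population = elite + [
        [p + rng.normal(0, 0.1, np.shape(p)) for p in elite[rng.integers(0, 10)]]
        for _ in range(20)
    ]
print("best loss:", min(map(loss, population)))
```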
Whatever methods are used, some question whether AI can really succeed on Wall Street. Even if one fund achieves success with AI, the risk is that others will duplicate the system and thus undermine its success. If a large portion of the market behaves in the same way, it changes the market. "I'm a bit skeptical that AI can truly figure this out," Carlson says. "If someone finds a trick that works, not only will other funds latch on to it but other investors will pour money into it. It's really hard to envision a situation where it doesn't just get arbitraged away." Goertzel sees this risk. That's why Aidyia is using not just evolutionary computation but a wide range of technologies. And if others imitate the company's methods, it will embrace other types of machine learning. The whole idea is to do something no other human---and no other machine---is doing. "Finance is a domain where you benefit not just from being smart," Goertzel says, "but from being smart in a different way from others."
" |
769 | 2,012 | "The Cult of Amazon: How a Bookseller Invented the Future of Computing | WIRED" | "https://www.wired.com/2012/11/amazon-3" | "[Photo caption: The view from the headquarters of Amazon Web Services, in downtown Seattle.]
In most of corporate America, you write the press release when your creation is finished. But at Amazon, you write it before you've even begun.
"If you were pitching something to Jeff Bezos or other senior managers below Jeff, the first thing you did was write a press release for it -- as if it were a product that you were putting out into the world," says Chris Brown, who spent more than three years at the company and remembers joining at least two pitch meetings with Bezos, the company's founder and CEO.
These Bezosian press releases are designed to focus pitches squarely on the needs of the company's customers, and they illustrate a much broader force that drives the Amazon machine. "That's one of the things that impressed me," Brown remembers. "If someone came up with an interesting idea -- if they said: 'Wow, I would find this useful' -- the next follow-on question was: 'Are there customers who would find this useful?'" This is how Brown explains why Amazon -- of all companies -- created the Elastic Compute Cloud, an internet service that has completely changed the face of computing since it debuted a little over six years ago, providing instant access not to an online store or a search engine or an e-mail account, but to a virtually unlimited collection of computing power. Brown was among the many who helped gestate the idea, and he was part of the small team of engineers that built the service at an Amazon satellite office in Cape Town, South Africa.
Yes, Amazon is the world's largest online retailer. It made its name selling books and DVDs and so many other physical goods. But somewhere along the way, as the company worked to build new technologies that would make it easier to run its vast retail operation, Bezos and the rest of the braintrust realized that if Amazon and its partners needed new technology, so did the rest of the world. The result was the Elastic Compute Cloud and various other Amazon Web Services that would make it easier for anyone to run their own operations -- whatever those operations might be.
With EC2, you can use all that computing power to run just about any software application you like, including a website such as Instagram or Pinterest, or a banking application that simulates credit risk, or a research tool that analyzes the human genome. Rather than loading your software on physical computer servers you've set up in a closet or a data center, you can load it onto virtual servers you've set up in your web browser. And whenever you need more virtual servers, you can have them.
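For a concrete sense of what on-demand virtual servers look like from the developer's side, here is a minimal sketch using today's boto3 SDK; the AMI ID and key name are placeholders, and configured AWS credentials are assumed.

```python
# Launching an EC2 virtual server programmatically with boto3.
# ImageId and KeyName below are placeholders, not real resources.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder machine image
    InstanceType="t2.micro",
    KeyName="my-key",                 # placeholder SSH key pair
    MinCount=1,
    MaxCount=1,
)
print("launched:", response["Instances"][0]["InstanceId"])
```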
The service debuted in August 2006, just after a complementary offering called S3, which let you store vast amounts of computer data without setting up your own hardware. Six years later, these and other Amazon Web Services run as much as 1 percent of the internet.
But more than that, these services have changed the way we think about computing. In the years since the launch of EC2, the likes of Google and Microsoft and HP and Rackspace have launched similar services, and there are countless other outfits offering to help you build your own EC2.
As the company holds its first conference dedicated to Amazon Web Services, this still seems like an odd success story for an online retailer. "I was shocked that they did it, and not someone else," says David Patterson, a University of California, Berkeley, computer science professor who started using EC2 for various research projects in 2006 or 2007. And, unfortunately, the evolution of these services remains rather murky -- even after conversations with Brown, several other ex-Amazon employees, and Andy Jassy, who wrote the business plan for Amazon Web Services and continues to serve as its "CEO." The story of EC2 is like Rashomon.
Each player saw a different part of the story -- and some may have reason to omit parts they did see.
But you can see Amazon's corporate culture reflected in these seminal creations. And though some say Amazon will struggle to fend off competition from Google and Microsoft, the company is remarkably well suited to playing the cloud game.
Part of the genius of EC2 is that it gave software developers virtual machines that behaved a lot like the physical machines they were familiar with. They could run the same sort of software they had always used. Amazon didn't try to tell the customer what he wanted.
Google and Microsoft released beta versions of similar cloud services in 2008 -- Google App Engine and Windows Azure -- but these big-name competitors failed to completely grasp what made EC2 so successful. App Engine and Azure tried to make it easier to run software in the cloud, but in doing so they restricted what developers were able to do. The learning curve was steeper, and the public never really embraced them in the same way.
After leaving Amazon in 2007, Brown moved on to Microsoft, where he saw Azure develop first hand, and even then, he felt the company was missing the point. "I ranted at some of the architects at Microsoft that they were starting at the wrong end, that they were constraining the sorts of things you could do," he says.
"I actually wrote them an e-mail that said: 'These are the five ways you will be compared to EC2 the day that you launched,' and they were all about having control over things so that you could build and deploy stuff that you already knew, to get jobs done that you already knew." This year, in a kind of tacit admission that Amazon had gotten it right, both Google and Microsoft unveiled services that look a lot more like EC2.
David Patterson argues that Amazon didn't know what would catch on any more than Google or Microsoft. "They were all running a great experiment," he says. And to a certain extent, Brown bears this out. "We had no idea if it was even going to work," he remembers. "As an infrastructure geek, I found it to be an interesting experiment, and I thought that other infrastructure geeks would too, and suddenly, it becomes this giant thing that everybody knows.... I realized that it had changed the world and I thought: 'Wow, this is not what I set out to do.'" But however the company got here, Amazon is now well positioned to fend off the competition. Jassy says that Amazon didn't necessarily plan it this way, but EC2 and the other Amazon Web Services are businesses of low margins and high volume -- the kind of businesses that Amazon knows better than anyone else trying to play the cloud game.
"Amazon is very good at operating in a low-margin environment, and Jeff is very proud of this," says Chris Pinkham, who oversaw the development of EC2 before leaving the company around the time it launched. "He feels that low margins promote customer loyalty and -- frankly -- inhibit competition. I don't know what Amazon's margins are right now, but there are some significant forces on its side." Senior Writer X Topics Amazon Cloud Computing data EC2 Enterprise Google Microsoft secret servers Steven Levy Will Knight Steven Levy Vittoria Elliott Will Knight WIRED Staff Steven Levy Aarian Marshall Facebook X Pinterest YouTube Instagram Tiktok More From WIRED Subscribe Newsletters Mattresses Reviews FAQ Wired Staff Coupons Black Friday Editorial Standards Archive Contact Advertise Contact Us Customer Care Jobs Press Center RSS Accessibility Help Condé Nast Store Do Not Sell My Personal Info © 2023 Condé Nast. All rights reserved. Use of this site constitutes acceptance of our User Agreement and Privacy Policy and Cookie Statement and Your California Privacy Rights.
" |
770 | 2,017 | "Kitty Hawk's Sebastian Thrun Defends Flying Cars | WIRED" | "https://www.wired.com/2017/05/sebastian-thrun-defends-flying-cars-to-me" | "[Photo caption: It looks like a drone mated with a hovercraft, but the Flyer is a first draft of a true flying car.]
Some years ago, venture capitalist Peter Thiel made a famous complaint about what, in his view, was insufficient swashbuckling in Silicon Valley. "We were promised flying cars," he wrote, "and instead what we got was 140 characters." Well, better late than never: We just learned that Kitty Hawk, a company backed by Google cofounder Larry Page, is working on the Flyer, a first draft of the flying car for which Thiel and other tech magnates have been so ardently pining. The prototype Kitty Hawk Flyer is a 220-pound ultralight aircraft (no pilot license required) meant to soar only over water. Still, Kitty Hawk explicitly frames the company's overall goal as building the future of personal aerial transportation.
But could it be that in this case, never is better than late? You can boil down the problems of flying cars to seven factors: safety, cost, noise, sky congestion, parking, regulation, and the overall question of why we even need them. I could think of no one better to address these concerns than Sebastian Thrun, the CEO of Kitty Hawk. Thrun is an AI scientist, a pioneer of self-driving cars, and an entrepreneur who also cofounded the online education firm Udacity. He cheerfully agreed to my proposal for an interview where I would act as the voice of brutal skepticism about the whole Jetson-esque enterprise, pitching him a series of cranky questions. Despite my best efforts, he remained upbeat and unflappable throughout. Whether he makes his case is up to you.
Steven Levy: Why do we need flying cars? Sebastian Thrun: It is a childhood dream. Flying is just such a magical thing to do. Making personalized flight available to everybody really opens up a set of new experiences. But in the long term there’s a practicality to the idea of a flying vehicle that takes off vertically like a helicopter, is very quiet, and can serve short range transportation. The ground is getting more and more congested. In the US, road usage increases by about three percent every year. But we don’t build any roads. And countries like China that very recently witnessed an explosion of automotive ownership are suffering tremendously from unbelievable traffic jams. While the ground infrastructure of roads is one-dimensional, the sky is three-dimensional, and it is much, much larger.
But if you build flying cars, won't the air be just as congested? The nice thing about the air is there is more of it. You could have virtual highways in the sky and stack them vertically. So you never have a traffic intersection or similar.
But highways have lanes. You can’t have dotted lines in the sky.
Yes, you can, it turns out. Thanks to the US government we have the Global Positioning System that gives us precision location information. We can paint virtual highways into the sky. We are actually doing this today. When you look at the way planes fly, they use equipment that effectively constructs highways in the sky.
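As a rough illustration of what a GPS-defined "lane" could mean, here is a toy sketch: given a position fix, compute how far an aircraft has drifted from a straight corridor between two waypoints. The waypoints, corridor width, and flat-earth projection are all simplifying assumptions, not anything Kitty Hawk has described.

```python
# Toy "virtual highway" check: perpendicular distance from a GPS fix
# to the line between two waypoints, using a small-area projection.
import math

def to_xy(lat, lon, lat0, lon0):
    # Project degrees to meters near a reference point (flat-earth approx).
    m_per_deg = 111_320.0
    return ((lon - lon0) * m_per_deg * math.cos(math.radians(lat0)),
            (lat - lat0) * m_per_deg)

def cross_track_m(pos, a, b):
    # Distance from pos to the line through waypoints a and b.
    px, py = to_xy(*pos, *a)
    bx, by = to_xy(*b, *a)
    return abs(px * by - py * bx) / math.hypot(bx, by)

a, b = (37.7749, -122.4194), (37.8044, -122.2712)   # SF to Oakland
fix = (37.7900, -122.3400)
drift = cross_track_m(fix, a, b)
print(f"off-corridor by {drift:.0f} m;",
      "inside" if drift < 100 else "outside", "a 100 m lane")
```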
Still, the number of planes is tiny compared to cars, which you want to put in the air. Plus, everybody is buying drones. If you folks get your way, the sky is going to be completely full.
Every idea put to the extreme sounds odd. But, right now you look at the sky and you might see at most six things up there. Usually you see nothing. And you have to live in a congested area like San Francisco to see six things in the air. So we are a far distance from any visual obstruction of the air. To me, what’s much more important is noise. When a small aircraft flies over our house, it is very audible.
I’m glad you brought that up. These are noisy! The prototype that we showed is noisy, but we have a design under the wraps that will likely have a noise level similar to a passing car. The promise of electric technology is conceivably very quiet. Electric flight has the potential to be as quiet as a hummingbird.
Where do you park a flying car? The footprint of our prototype that we demonstrated is about the same as a small car.
Yeah, and try to find a parking space in San Francisco. Where are you going to park these things? We don’t know yet. Honestly, the vehicle we are building right now is meant as a motorsports vehicle. It’s operated only a few feet above the water line. Primarily for safety concerns. And it would be on your trailer or in your garage.
Right, at first you are building your flying cars to ride over water. I’m sure people who own quiet lake homes will be happy about that.
It will be as quiet or quieter than a jet ski. And it will be up to people to judge whether the noise levels are okay or not. This is not meant to bother other people; it is meant to empower people.
Small planes have a much higher accident rate than commercial airlines. It's dangerous to fall from a height. Are you worried people are going to die from these things? I worry greatly about the safety. That is certainly a main concern. The reason why I think ultimately flight will actually be safer than ground is that on the ground there is a lot of stuff to hit. In the sky there is almost nothing to hit.
But you are planning to have plenty of stuff up there to hit. All those flying cars! They can be coordinated to be out of the way. This happens today when you look at air traffic, which routes planes in different directions and at different altitudes. What is absolutely correct is that the equivalent of a fender-bender in the air is likely death. We know this. But if we look at the reasons why general aviation — small aircraft — aren't safe, it is almost entirely because of pilot error. It is actually hard to land the plane at the proper air speed and in cross wind on a narrow runway. The nice thing we can harvest in Flyer is a computer system that relieves the pilot of these difficulties. Flyer will be as easy to pilot as any modern drone.
Drones crash all the time.
Drones crash a lot of the time because people make pilot errors. But when you sit in one, I don’t think you are going to fly yourself happily into a wall.
Maybe not happily.
The nice thing about this new type of flight is you can make the controls. We can design the computer to take away all the powers of flying that make flying hard and leave you the parts that makes flying easy. So on Flyer we have a joystick-based interface that lets you command flying. Very much like a 3D video game. If at any point in time you feel unsafe, you just take your hands off and you stay exactly where you are. There is no such aircraft today that can accomplish this.
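The "take your hands off and you stay where you are" behavior is, at heart, a feedback controller. Here is a toy one-dimensional sketch assuming a simple PD position hold; the dynamics, gains, and disturbance are invented and bear no relation to Kitty Hawk's actual control laws.

```python
# Toy position-hold: when the stick is centered, a PD controller pulls
# the vehicle back to the point where the pilot let go.
kp, kd, dt = 4.0, 3.0, 0.02   # proportional gain, damping gain, timestep
pos, vel, hold = 0.0, 0.0, 0.0

for step in range(500):
    stick = 1.0 if step < 100 else 0.0     # pilot pushes, then lets go
    if stick != 0.0:
        vel += stick * dt * 5.0            # stick commands acceleration
        hold = pos                          # keep updating the hold point
    else:
        accel = kp * (hold - pos) - kd * vel   # PD pull toward hold point
        vel += accel * dt
    pos += vel * dt

print(f"final position {pos:.3f} m vs hold point {hold:.3f} m")
```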
Basically, isn’t this just a bunch of Silicon Valley billionaires inventing an indulgent form of transportation because they liked The Jetsons , and don’t want to be on public transit and highways like the rest of us? Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg Honestly, my objective is not to build a better way for me or my friends to get around. My objective is to really understand how safe and quiet and energy-economic air travel can change the way we move about. Transportation is so important to us. At this point most of us are confined to a very coarse set of arteries on the ground that are heavily congested. I believe if we invent safe, affordable, quiet technology that can, at some point in the distant future, be widely used throughout the nation, we could really alter the way transportation works. It would have a tremendous impact.
Still, won’t we peons be stuck on the ground while you and Larry Page and everyone who can afford it are zipping around over our heads? I am very convinced that at scale the prices for these vehicles will end up lower than the price of a car. And the reason is there isn’t much on these vehicles. In addition, there is the vision of having an air taxi service — it picks you up and then brings you to where you want to go. For people who live in a congested area, it could be a fundamental game changer. Having said this, this is a distant vision. Flyer is not intended to do this. Flyer is intended to be a very, very first vehicle that will let normal people without a pilot license experience safely the beauty of flight.
Is there any other cranky question I forgot ask? You could ask about regulators.
Good point. Won’t any level-headed regulator just nix this whole idea? We are working very actively with the FAA and other regulators, because at the core we share the same concern, which is safety. Especially as you innovate in something that has the potential to put bodily harm or even death to people. It is really important that this is done ethically and safely. As a result we see our friends from the FAA very, very frequently. And we’ve experienced really great collaboration. I am a technologist, so I can invent the technology, but it is the society that has to accept the technology. The more everyone can work together, the better for everyone involved.
Editor at Large X Topics Backchannel Flying Cars transportation Brandi Collins-Dexter Andy Greenberg Steven Levy Lauren Smiley Angela Watercutter Facebook X Pinterest YouTube Instagram Tiktok More From WIRED Subscribe Newsletters Mattresses Reviews FAQ Wired Staff Coupons Black Friday Editorial Standards Archive Contact Advertise Contact Us Customer Care Jobs Press Center RSS Accessibility Help Condé Nast Store Do Not Sell My Personal Info © 2023 Condé Nast. All rights reserved. Use of this site constitutes acceptance of our User Agreement and Privacy Policy and Cookie Statement and Your California Privacy Rights.
WIRED may earn a portion of sales from products that are purchased through our site as part of our Affiliate Partnerships with retailers. The material on this site may not be reproduced, distributed, transmitted, cached or otherwise used, except with the prior written permission of Condé Nast.
Ad Choices Select international site United States LargeChevron UK Italia Japón Czech Republic & Slovakia
" |
771 | 2,017 | "Artificial Intelligence Is Killing the Uncanny Valley and Our Grasp on Reality | WIRED" | "https://www.wired.com/story/future-of-artificial-intelligence-2018" | "Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Sandra Upson Backchannel Artificial Intelligence Is Killing the Uncanny Valley and Our Grasp on Reality Laurent Hrybyk Save this story Save Save this story Save Application Deepfakes Text generation Source Data Images Text Video Technology Machine learning Machine vision Neural Network There’s a revolution afoot, and you will know it by the stripes.
Earlier this year, a group of Berkeley researchers released a pair of videos. In one, a horse trots behind a chain link fence. In the second video, the horse is suddenly sporting a zebra’s black-and-white pattern. The execution isn’t flawless, but the stripes fit the horse so neatly that it throws the equine family tree into chaos.
Turning a horse into a zebra is a nice stunt, but that’s not all it is. It is also a sign of the growing power of machine learning algorithms to rewrite reality. Other tinkerers, for example, have used the zebrafication tool to turn shots of black bears into believable photos of pandas, apples into oranges, and cats into dogs. A Redditor used a different machine learning algorithm to edit porn videos to feature the faces of celebrities. At a new startup called Lyrebird , machine learning experts are synthesizing convincing audio from one-minute samples of a person’s voice. And the engineers developing Adobe’s artificial intelligence platform, called Sensei , are infusing machine learning into a variety of groundbreaking video, photo, and audio editing tools. These projects are wildly different in origin and intent, yet they have one thing in common: They are producing artificial scenes and sounds that look stunningly close to actual footage of the physical world. Unlike earlier experiments with AI-generated media, these look and sound real.
Sandra Upson is Backchannel's executive editor.
Sign up to get Backchannel's weekly newsletter, and follow us on Facebook , Twitter , and Instagram.
The technologies underlying this shift will soon push us into new creative realms, amplifying the capabilities of today’s artists and elevating amateurs to the level of seasoned pros. We will search for new definitions of creativity that extend the umbrella to the output of machines. But this boom will have a dark side, too. Some AI-generated content will be used to deceive, kicking off fears of an avalanche of algorithmic fake news. Old debates about whether an image was doctored will give way to new ones about the pedigree of all kinds of content, including text. You’ll find yourself wondering, if you haven’t yet: What role did humans play, if any, in the creation of that album/TV series/clickbait article? A world awash in AI-generated content is a classic case of a utopia that is also a dystopia. It’s messy, it’s beautiful, and it’s already here.
Currently there are two ways to produce audio or video that resembles the real world. The first is to use cameras and microphones to record a moment in time, such as the original Moon landing. The second is to leverage human talent, often at great expense, to commission a facsimile. So if the Moon descent had been a hoax, a skilled film team would have had to carefully stage Neil Armstrong’s lunar gambol. Machine learning algorithms now offer a third option, by letting anyone with a modicum of technical knowledge algorithmically remix existing content to generate new material.
At first, deep-learning-generated content wasn’t geared toward photorealism. Google’s Deep Dreams , released in 2015, was an early example of using deep learning to crank out psychedelic landscapes and many-eyed grotesques. In 2016, a popular photo editing app called Prisma used deep learning to power artistic photo filters, for example turning snapshots into an homage to Mondrian or Munch. The technique underlying Prisma is known as style transfer: take the style of one image (such as The Scream ) and apply it to a second shot.
Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg Now the algorithms powering style transfer are gaining precision, signalling the end of the Uncanny Valley—the sense of unease that realistic computer-generated humans typically elicit. In contrast to the previous somewhat crude effects, tricks like zebrafication are starting to fill in the Valley’s lower basin. Consider the work from Kavita Bala’s lab at Cornell, where deep learning can infuse one photo’s style , such as a twinkly nighttime ambience, into a snapshot of a drab metropolis—and fool human reviewers into thinking the composite place is real. Inspired by the potential of artificial intelligence to discern aesthetic qualities, Bala cofounded a company called Grokstyle around this idea. Say you admired the throw pillows on a friend’s couch or a magazine spread caught your eye. Feed Grokstyle’s algorithm an image, and it will surface similar objects with that look.
“What I like about these technologies is they are democratizing design and style,” Bala says. “I’m a technologist—I appreciate beauty and style but can’t produce it worth a damn. So this work makes it available to me. And there’s a joy in making it available to others, so people can play with beauty. Just because we are not gifted on this certain axis doesn’t mean we have to live in a dreary land.” Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg At Adobe, machine learning has been a part of the company’s creative products for well over a decade, but only recently has AI become transformative. In October engineers working on Sensei, the company’s set of AI technologies, showed off a prospective video editing tool called Adobe Cloak, which allows its user to seamlessly remove, say, a lamppost from a video clip—a task that would ordinarily be excruciating for an experienced human editor. Another experiment, called Project Puppetron, applies an artistic style to a video in real time. For example, it can take a live feed of a person and render him as a chatty bronze statue or a hand-drawn cartoon. “People can basically do a performance in front of a web cam or any camera and turn that into animation, in real time,” says Jon Brandt, senior principal scientist and director of Adobe Research. (Sensei’s experiments don’t always turn into commercial products.) Machine learning makes these projects possible because it can understand the parts of a face or the difference between foreground and background better than previous approaches in computer vision. Sensei’s tools let artists work with concepts, rather than the raw material. “Photoshop is great at manipulating pixels, but what people are trying to do is manipulate the content that is represented by the pixels,” Brandt explains.
Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg That’s a good thing. When artists no longer waste their time wrangling individual dots on a screen, their productivity increases, and perhaps also their ingenuity, says Brandt. “I am excited about the possibility of new art forms emerging, which I expect will be coming.” But it’s not hard to see how this creative explosion could all go very wrong. For Yuanshun Yao, a University of Chicago graduate student, it was a fake video that set him on his recent project probing some of the dangers of machine learning. He had hit play on a recent clip of an AI-generated, very real-looking Barack Obama giving a speech, and got to thinking: Could he do a similar thing with text? More Predictions for 2018 Steven Levy Ricki Harris Rex Sorgatz A text composition needs to be nearly perfect to deceive most readers, so he started with a forgiving target, fake online reviews for platforms like Yelp or Amazon. A review can be just a few sentences long, and readers don’t expect high-quality writing. So he and his colleagues designed a neural network that spat out Yelp-style blurbs of about five sentences each. Out came a bank of reviews that declared such things as, “Our favorite spot for sure!” and “I went with my brother and we had the vegetarian pasta and it was delicious.” He asked humans to then guess whether they were real or fake, and sure enough, the humans were often fooled.
With fake reviews costing around $10 to $50 each from micro-task marketplaces, Yao figured it was just a matter of time before a motivated engineer tried to automate the process, driving down the price and kicking off a plague of false reviews. (He also explored using neural nets to defend a platform against fake content, with some success.) “As far as we know there are not any such systems, yet,” Yao says. “But maybe in five or ten years, we will be surrounded by AI-generated stuff.” His next target? Generating convincing news articles.
…And A Few More: Scott Rosenberg Alexis Sobel Fitts Steven Levy Erin Griffith Progress on videos may move faster. Hany Farid, an expert at detecting fake photos and videos and a professor at Dartmouth, worries about how fast viral content spreads, and how slow the verification process is. Farid imagines a near future in which a convincing fake video of President Trump ordering the total nuclear annihilation of North Korea goes viral and incites panic, like a recast War of the Worlds for the AI era. “I try not to make hysterical predictions, but I don’t think this is far-fetched,” he says. “This is in the realm of what’s possible today.” Fake Trump speeches are already circulating on the internet, a product of Lyrebird, the voice synthesis startup—though in the audio clips the company has shared with the public, Trump keeps his finger off the button, limiting himself to praising Lyrebird. Jose Sotelo, the company’s cofounder and CEO, argues that the technology is inevitable, so he and his colleagues might as well be the ones to do it, with ethical guidelines in place. He believes that the best defense, for now, is raising awareness of what machine learning is capable of. “If you were to see a picture of me on the moon, you would think it’s probably some image editing software,” Sotelo says. “But if you hear convincing audio of your best friend saying bad things about you, you might get worried. It’s a really new technology and a really challenging problem.” Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg Likely nothing can stop the coming wave of AI-generated content—if we even wanted to. At its worst, scammers and political operatives will deploy machine learning algorithms to generate untold volumes of misinformation. Because social networks selectively transmit the most attention-grabbing content, these systems’ output will evolve to be maximally likeable, clickable, and shareable.
But at its best, AI-generated content is likely to heal our social fabric in as many ways as it may rend it. Sotelo of Lyrebird dreams of how his company’s technology could restore speech to people who have lost their voice to diseases such as ALS or cancer. That horse-to-zebra video out of Berkeley? It was a side effect of work to improve how we train self-driving cars. Often, driving software is trained in virtual environments first, but a world like Grand Theft Auto only roughly resembles reality. The zebrafication algorithm was designed to shrink the distance between the virtual environment and the real world, ultimately making self-driving cars safer.
These are the two edges of the AI sword. As it improves, it mimics human actions more and more closely. Eventually, it has no choice but to become all too human: capable of good and evil in equal measure.
Features Editor X Topics Backchannel artificial intelligence machine learning Steven Levy Brandi Collins-Dexter Andy Greenberg Angela Watercutter Lauren Smiley Facebook X Pinterest YouTube Instagram Tiktok More From WIRED Subscribe Newsletters Mattresses Reviews FAQ Wired Staff Coupons Black Friday Editorial Standards Archive Contact Advertise Contact Us Customer Care Jobs Press Center RSS Accessibility Help Condé Nast Store Do Not Sell My Personal Info © 2023 Condé Nast. All rights reserved. Use of this site constitutes acceptance of our User Agreement and Privacy Policy and Cookie Statement and Your California Privacy Rights.
WIRED may earn a portion of sales from products that are purchased through our site as part of our Affiliate Partnerships with retailers. The material on this site may not be reproduced, distributed, transmitted, cached or otherwise used, except with the prior written permission of Condé Nast.
Ad Choices Select international site United States LargeChevron UK Italia Japón Czech Republic & Slovakia
" |
772 | 2,018 | "Fake Video Will Complicate Viral Justice | WIRED" | "https://www.wired.com/story/faked-video-could-end-justice-by-twitter-mob" | "Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Catherine F. Brooks Security Faked Video Will Complicate Justice by Twitter Mob Alyssa Foote Save this story Save Save this story Save It used to be that cameras never lie.
We tend to privilege visual content , trust what we see, and rely on police cams, mobile recording tools and similar devices to tell us about what is really happening on the streets, in local businesses, and more.
Catherine Brooks (@catfbrooks) is an Associate Professor of Information at the University of Arizona, where she is the associate director of the School of Information and founding director of the Center for Digital Society and Data Studies. She is a Public Voices Fellow with the Op Ed Project.
Take, for example, a viral video that shows a white woman calling the police as black men in Oakland attempt to barbecue. Millions are laughing, and the woman’s image is being used as a meme across the Internet. When a video of a patron threatening café employees for not speaking English went viral, the subject, a New York attorney Aaron Schlossberg, was identified on social media within hours.
His office information was shared quickly, comments on review pages and public shaming ensued. The racist lawyer ended up with the attention of mariachis playing music outside of his apartment.
In both these cases, the videos were real, the memes entertaining, and the Twitter storm was deserved. After all, mobile videos and other cams provide transformative new avenues for justice, precisely because they can spread like fire around the world. But this kind of ‘justice’ landscape only works as long as we can trust the videos we see—and faked videos are on the horizon. Often called “deepfakes,” a term coined by a Reddit user for videos that swap porn star faces for those of famous people, fake videos are quickly becoming more prevalent. With a kind of Photoshop for video, artificial intelligence affords just about anyone the tools to generate fake visual content.
This kind of ‘justice’ only works as long as we can trust the videos we see—and faked videos are on the horizon.
Using a tool like FakeApp (an app that uses deep learning to make face-swap videos), pretty much anyone can gather images and make a video without a lot of computational skill. Very swiftly we have moved from the crude superimposing of faces in movies and video games, to sophisticated AI tools that give the average citizen means for doctoring visual content, and limited help in discerning this doctored material.
In a world of fake news, anyone can write a story that seems reliable; soon generating fake videos will become as commonplace. More and more, these videos will provide easy means for harassing individual citizens, influencing public officials, or threatening peers in schools. We can easily imagine a world of revenge porn, cyberbullying, and other kinds of public harassment of average citizens – maybe even children.
In a world of fake news, anyone can write a story that seems reliable; soon generating fake videos will become as commonplace Most consumers will be able to recognize the subtle cues of inauthenticity, only if they watch very carefully. But as we’ve learned from the rise of fake news, often people don’t consume information carefully. In a world where police cams, public surveillance videos, or even mobile recordings are used in highly-consequential scenarios, like court hearings, and when social media-based persuasion tactics are influencing elections around the globe, assuming people will ‘watch carefully’ is akin to assuming people will read online content critically. These technologies will become increasingly sophisticated over a very short period of time, making it more and more difficult for average consumers to be able to recognize deceptive tactics.
Reddit banned deepfakes.
But there will be other deepfakes. While consumers of information must be vigilant and remain critical when taking in public messages today, tech leaders must develop sophisticated but easy-to-use tools for average message consumers to be able to see doctored content.
Blockchain may work , but we’d better move quickly. The safety of ourselves and our democracy depends on it.
WIRED Opinion publishes pieces written by outside contributors and represents a wide range of viewpoints. Read more opinions here.
How San Quentin inmates built a search engine for prison The US again has the world’s most powerful supercomputer As rental cars fade away, Avis will try anything to survive PHOTO ESSAY: Inside the Arctic Circle, golden hour has nothing on golden day Meet the man at Apple who got apps talking to each other Get even more of our inside scoops with our weekly Backchannel newsletter Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg Topics Video Andy Greenberg Lily Hay Newman David Gilbert Dell Cameron Andy Greenberg Reece Rogers Matt Burgess Lily Hay Newman Facebook X Pinterest YouTube Instagram Tiktok More From WIRED Subscribe Newsletters Mattresses Reviews FAQ Wired Staff Coupons Black Friday Editorial Standards Archive Contact Advertise Contact Us Customer Care Jobs Press Center RSS Accessibility Help Condé Nast Store Do Not Sell My Personal Info © 2023 Condé Nast. All rights reserved. Use of this site constitutes acceptance of our User Agreement and Privacy Policy and Cookie Statement and Your California Privacy Rights.
WIRED may earn a portion of sales from products that are purchased through our site as part of our Affiliate Partnerships with retailers. The material on this site may not be reproduced, distributed, transmitted, cached or otherwise used, except with the prior written permission of Condé Nast.
Ad Choices Select international site United States LargeChevron UK Italia Japón Czech Republic & Slovakia
" |
773 | 2,016 | "US lawmakers say AI deepfakes ‘have the potential to disrupt every facet of our society’ - The Verge" | "https://www.theverge.com/2018/9/14/17859188/ai-deepfakes-national-security-threat-lawmakers-letter-intelligence-community" | "The Verge homepage The Verge homepage The Verge The Verge logo.
/ Tech / Reviews / Science / Entertainment / More Menu Expand Menu Tech / Artificial Intelligence US lawmakers say AI deepfakes ‘have the potential to disrupt every facet of our society’ US lawmakers say AI deepfakes ‘have the potential to disrupt every facet of our society’ / They’re asking the intelligence community to assess the threat from AI video manipulation By James Vincent , a senior reporter who has covered AI, robotics, and more for eight years at The Verge.
| Share this story US politicians are getting increasingly worried about deepfakes — a new type of AI-assisted video editing that creates realistic results with minimal effort. Yesterday, a trio of lawmakers sent a letter to the Director of National Intelligence, Dan Coats, asking him to assess the threat posed to national security by this new form of fakery.
The letter says “hyper-realistic digital forgeries” showing “convincing depictions of individuals doing or saying things they never did” could be used for blackmail and misinformation. “As deep fake technology becomes more advanced and more accessible, it could pose a threat to United States public discourse and national security,” say the letter’s signatories, House representatives Adam Schiff (D-CA), Stephanie Murphy (D-FL), and Carlos Curbelo (R-FL).
Deepfakes have the potential for blackmail, misinformation, and more The trio want the intelligence community to produce a report that includes descriptions of when “confirmed or suspected” deepfakes have been produced by foreign individuals (there are no current examples of this), and to suggest potential countermeasures.
In a press statement, Curbelo said: “Deep fakes have the potential to disrupt every facet of our society and trigger dangerous international and domestic consequences [...] As with any threat, our Intelligence Community must be prepared to combat deep fakes, be vigilant against them, and stand ready to protect our nation and the American people.” This isn’t the first time lawmakers have raised this issue. Earlier in the year, senators Mark Warner (D-VA) and Marco Rubio (R-FL) warned that deepfakes should be treated as a national security threat. In a speech, Rubio said the technology could supercharge misinformation campaigns led by foreign powers, singling out Russia as a particular threat.
“I know for a fact that the Russian Federation at the command of Vladimir Putin tried to sow instability and chaos in American politics in 2016,” said Rubio. “They did that through Twitter bots and they did that through a couple of other measures that will increasingly come to light. But they didn’t use this. Imagine using this. Imagine injecting this in an election.” Deepfakes first came to prominence in 2016 when users on Reddit started using cutting-edge AI research to paste the faces of celebrities onto porn. The term itself doesn’t refer to any particular research, but is a portmanteau that combines “deep learning” with “fakes.” The phrase was first used by a Reddit user, but is slowly becoming synonymous with a wide-range of AI editing technology. Such tools can turn people into virtual puppets , syncing their mouths with someone else’s speech, or just making them dance like a pro.
A number of organizations, including university labs, startups, and even parts of the military, are examining ways to reliably detect deepfakes. These include methods like spotting irregular blinking patterns or unrealistic skin tone.
However, researchers agree that there’s no single method, and that whatever deepfake-spotting tool is created will soon be tricked by new versions of the technology. At any rate, even if there was an easy way to spot deepfakes, it wouldn’t necessarily stop the technology from being used maliciously. We know that from the spread of fake news on networks like Facebook. Even if it can be easily disproven, it can still convince those who want to believe.
Despite these challenges, getting the government involved is encouraging news. “This is a constructive step,” Stewart Baker, a former general counsel for the National Security Agency, told The Washington Post.
“It’s one thing for academics and techies to say that deepfakes are a problem, another for the intelligence community to say the same. It makes the concern something that Congress can address without fear of being second-guessed on how big the problem is.” Sam Altman fired as CEO of OpenAI Breaking: OpenAI board in discussions with Sam Altman to return as CEO Windows is now an app for iPhones, iPads, Macs, and PCs Screens are good, actually What happened to Sam Altman? Verge Deals / Sign up for Verge Deals to get deals on products we've tested sent to your inbox daily.
From our sponsor Advertiser Content From More from Tech The latest AI copyright lawsuit involves Mike Huckabee and his books Amazon, Microsoft, and India crack down on tech support scams Amazon eliminated plastic packaging at one of its warehouses Amazon has renewed Gen V for a sophomore season Advertiser Content From Terms of Use Privacy Notice Cookie Policy Do Not Sell Or Share My Personal Info Licensing FAQ Accessibility Platform Status How We Rate and Review Products Contact Tip Us Community Guidelines About Ethics Statement The Verge is a vox media network Advertise with us Jobs @ Vox Media © 2023 Vox Media , LLC. All Rights Reserved
" |
774 | 2,023 | "All of these faces are fake celebrities spawned by AI - The Verge" | "https://www.theverge.com/2017/10/30/16569402/ai-generate-fake-faces-celebs-nvidia-gan" | "All of these faces are fake celebrities spawned by AI / New research from Nvidia uses artificial intelligence to generate high-res fake celebs. By James Vincent, a senior reporter who has covered AI, robotics, and more for eight years at The Verge.
One of the more unexpected outcomes of the contemporary AI boom is just how good these systems are at generating fake imagery. The latest example comes from chipmaker Nvidia, which published a paper showing how AI can create photorealistic pictures of fake celebrities. Generating fake celebs isn’t in itself new, but researchers say these are the most convincing and detailed pictures of their type ever made.
The video below shows the process in full, starting with the database of celebrity images the system was trained on. The researchers used what’s known as a generative adversarial network, or GAN, to make the pictures. GANs actually comprise two separate networks: one that generates the imagery based on the data it’s fed, and a second discriminator network (the adversary) that checks whether that imagery is real.
By working together, these two networks can produce some startlingly good fakes. And not just faces either — everyday objects and landscapes can also be created. The generator network produces the images, the discriminator checks them, and then the generator improves its output accordingly. Essentially, the system is teaching itself.
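For readers who want to see the shape of that loop, here is a minimal, generic GAN training step in PyTorch. It is a sketch of the technique in general, not Nvidia's progressive-growing method from the paper; the layer sizes, image dimensions, and learning rates are placeholder assumptions:

    import torch
    import torch.nn as nn

    latent_dim = 64
    # Generator: random noise in, a flattened 28x28 "image" out.
    G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh())
    # Discriminator (the adversary): an image in, one real-vs-fake logit out.
    D = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    bce = nn.BCEWithLogitsLoss()

    def gan_step(real: torch.Tensor) -> None:
        n = real.size(0)
        fake = G(torch.randn(n, latent_dim))

        # The discriminator learns to score real images as 1 and generated ones as 0.
        d_loss = bce(D(real), torch.ones(n, 1)) + bce(D(fake.detach()), torch.zeros(n, 1))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # The generator improves by trying to make the discriminator output 1 on its fakes.
        g_loss = bce(D(fake), torch.ones(n, 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()

Run over many batches, this two-network contest is the sense in which the system "teaches itself": each side's progress becomes the other side's training signal.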
There are limitations to this method of course. The pictures created are extremely small by the standards of modern cameras (just 1,024 by 1,024 pixels) and there are quite a few tell-tale signs they’re fake. For a start, they look like the celebrities the system was trained on (check out the Beyoncé lookalike early on) and there are glitchy parts in most images, like an ear that dribbles away into red mush.
As we’ve discussed in the past, this sort of technology could be put to all sorts of uses. There are obvious benefits for the creative industries, for making things like advertising and video games. But there’s also a threat in the form of disinformation. Sure, talented image editors have been able to create fake celeb photos for years using Photoshop, but AI tools will make this work quick and easy. (Adobe is already working on a number of AI-powered projects.) And when we know the President of the USA can be fooled by re-used footage of a missile launch, it’s probably a good time to be worried about AI fakes.
" |
775 | 2,008 | "Watch WIRED25: Google CEO Sundar Pichai on Doing Business in China, Working with the Military, and More | WIRED" | "https://www.wired.com/video/watch/google-ceo-sundar-pichai-at-wired25" | "WIRED25: Google CEO Sundar Pichai on Doing Business in China, Working with the Military, and More
Released on 10/15/2018
(downtempo electronic music) Hi, everyone.
We're gonna end on a high note.
We've got Sundar Pichai, who's the CEO of Google.
That's pretty good.
(audience laughing) Be careful what you wish for.
He became CEO in 2015, I learned.
I didn't have to Google, he was right here to answer that question for me.
I remember, though, my moment with you came.
I knew we'd met before in 2008 when we did a story for Wired Magazine.
We were embedded in the Chrome project, which Sundar headed.
He had this crazy idea that Google should do a browser.
It can compete with the dominant browsers on the Internet.
How did Chrome do? I think we've done well.
Like the most popular in the world, is that true? So that was pretty good.
(audience laughing) I was working on a book about Google back then.
It came out about seven years ago and I think in the last few years, a lot has happened to make me wonder what's different about Google than it was just a few years ago in the pre-mobile, pre-AI world there? Is Google different? Is the mission still the same, which is to organize the world's information and make it universally accessible? Or is it something broader or different now than it was from that founding mission that Larry and Sergey had? That's a good question.
I think we had a chance to reflect upon it.
We turned 20 years old just this past month.
It gives you a chance to step back and think about it.
I think there are many ways in which the company's still the same and some ways different.
I do think our mission feels timeless to me.
We are fortunate to have a mission which I think is still as applicable today.
Our values feel the same to me and as a company, if anything, with the Google-Alphabet transition I think we feel rejuvenated about working on the core problem of information.
I think we can do it better because of AI.
So that's been a core focus.
I still think it's possible in the company for individual engineers to work and create new things just like we did Gmail or News.
When we launched Google Duplex, it's a pretty profound technology.
It was started by an engineer who was frustrated at the time it took to call a restaurant.
So I think things like that happen.
It's definitely changed in the sense that we have many products now.
We're humbled and fortunate to serve billions of users, which with it, comes a sense of responsibility now.
I think we are much more deliberate about what we do and how we think about it and now when we think about impact, we don't think about users alone.
We think about users, groups of people, societies, institutions, non-profit, for-profit businesses, and so we take a more expansive look at it.
Also I think when we think about information problems, rather than just Search alone, we think about how can we use better information to help healthcare or education, so it's expansive that way.
So, many things the same but I think a lot of exciting differences as well.
And the structure is different.
You're the CEO of Google.
Google is part of a larger enterprise, a holding company called Alphabet where the founders of the company 20 years ago, Larry Page and Sergey Brin, work there.
I'm sort of curious about their involvement in there and to paraphrase or even quote Bloomberg Business Week, where's Larry? Larry's doing what he loves to do.
I think both Larry and Sergey.
They are at their best when they're not thinking about what other people are working on today but they're thinking further ahead.
They tend to think in 10-year time frames about what you can do.
I think, partly, the structure allows them to do that.
I think the way Alphabet is set up is we do have other efforts.
We call them Other bets.
Waymo, Calico, Verily, et cetera, are great examples of it.
They focused on that.
They spent time with the people who run those groups.
They do that pretty deeply and regularly.
They are there.
I meet them once a week too.
It's working as we intended it to.
Mm-hmm.
How often would you be talking to Larry? We still do our weekly meetings at Google.
We used to do it on Fridays but it's now on Thursdays.
We take questions from the company and we answer.
Google is a wonderful place.
There's a lot of debate about everything.
We can talk about that, yeah.
We do that every Thursday night.
The other big change, I always saw Google as an AI company, but in the past few years, like other companies too, AI has just become so much more central, particularly, machine learning, deep learning AI.
Tell me about that transition, what that means and how Google is trying to drive AI, not only within the company but into the mainstream of business and life in general? We are very excited by it.
I think we made a big bet on it as a company.
Google has always had this kind of academic deep computer science approach to things.
It's what we believe we are good at.
You're fortunate once in a while to come across something which you think is pretty profound.
I'm sure as all of you understand, AI is that.
As a company, we are working hard.
I think it's a cross-cutting thing, which will impact many fields.
We take that view.
For us, seeing the work we can do on healthcare or even education, et cetera, I think AI feels very profound.
We are putting a lot of effort into it.
I do think we have some of the best in class teams across Google and DeepMind and so on.
Pretty excited at the progress.
Just last Friday, we published a paper on detecting breast cancer and the finding that AI working with pathologists together outperformed either pathologists doing it alone or AI doing it alone.
It's an exciting finding and things like that really motivate us to do more.
Right, well certainly at Wired, we celebrate AI but we also follow pretty closely to the discussions about ethics in AI, which I know you folks are concerned in.
What principles do you operate on to make sure that the AI you develop and maybe in health and other places in AI in general, moves along ethically in a way that's not gonna harm people or take us over and kill us? This is why we stepped back as a company.
Once we started working on AI, we realized this is different from other things we have worked on.
As a company, I think we publicly published and committed ourselves to a set of AI principles.
I think we've gone more comprehensive than most other companies, than anyone else.
We've kind of articulated our goals in terms of how we would do it, how we would approach our work and things we would not pursue too.
I think we are spending a lot of time thinking about it.
I think it is something which we will need to evolve over time but we need to take it very, very seriously.
Do you think that Google and the industry in general can self-regulate those ethics? Or at some point, would you welcome some outside body, or even the government, to make sure that AI proceeds ethically? I think the scale at which technology is impacting society, you are going to see regulation.
I think it's important there is powerful regulation on these things.
We think that's the right thing.
We do need to self-regulate quite a bit because sometimes regulation follows rather than keeping pace with it.
I would draw a parallel to geneticists and biologists.
When they are progressing with technologies like CRISPR, they draw boundaries on what they would do and they would not do well before regulations come into play.
We see our work the same way when we're working on AI.
I think it's good to do both.
Earlier on the stage, we had Jeff Bezos.
I asked him about the work that they're doing in defense contracts, both in the sky with Blue Origin and on the ground in AWS, there's a Project JEDI.
Google, after a lot of internal discussion, we could talk about how that discussion goes on.
Google has decided, tell me if this is correct, that it's not going to bid on this defense department cloud-computing Project JEDI.
You had a contract with the Department of Defense with a thing called Project Maven.
It's to use AI in the military and you're not gonna renew that there.
Talk about that.
What led to that? Maybe you can tell me whether you call it a reversal or not.
What led to Google's decision really to, it looks like back away from using AI in defense work? Maybe a few clarifications.
We do do work with the military.
Obviously, we deeply respect what they do to protect our country.
As Google, we have values.
As society, we cherish our values but we can enjoy that because of the, our country is defended.
I do wanna say we deeply respect the military and we are working with them on a set of projects.
We are going to continue to do on a set of projects, which we are qualified to do so, in areas like cybersecurity or even logistics, transportation, planning, et cetera.
The only area where we are being more deliberate about is where AI gets used with autonomous weaponry, AI and weaponry.
That's why I gave the, it's not just the consent of employees.
If you talk to senior researchers working in the field in the AI community, there's just worries about when you're so early with a powerful technology, how do you thoughtfully work your way through it? That's why I gave the biology example.
I think the parallel holds true here that we are thoughtful about it.
I think we are committed to, with the JEDI contract, it is more of a wholesale contract.
We don't have all the certifications.
But I think over time, there will be opportunities for us to work with the military on many things.
You can also imagine, as a consumer company, on many aspects, we are not the best qualified company to do it in certain projects, but we are definitely going to be thoughtful about how we think about it.
How much was the voice of your employees a factor in this? Throughout Google's history, we have given our employees a lot of voice and say in it, but we don't run the company by holding referendums.
It's an important input, we take it seriously.
Even on this particular issue, it's not just what the employees said.
It's more also the debate within the AI community around how you pursue your work in this area.
Another interesting area.
You mentioned the Duplex project there.
That was a fascinating thing.
Maybe I'll let you describe a little what it is rather than my characterization of it.
What's Duplex? It's where using AI, at least in a narrow domain, we are able to act on behalf of people.
So if you wanna call a restaurant and book a reservation, we call, we tell this is an automated service from Google calling and we'd like to book an appointment for you and do that.
It's a good area where we've been very deliberate about it.
We have the capability to roll it out much faster than what we have done.
We wanna test it and make sure people are okay with it.
People give us the right feedback and we are reading through it.
The thing that was fascinating about it and what a lot of people noticed was when you conversed with this thing, it was so good in part because you put in conversational pauses and made people feel that they were talking with a human being, which on one hand, could give them a comfort level of being able to ask freely what they want.
Sometimes, people are stilted when they talk to something that sounds automated.
On the other hand, some people have concerns.
People might be tricked, maybe not so much by Duplex, but down the road, we wouldn't know whether we're talking to a bot or not.
With every technology, we are actually doing deep AI work to detect when something is AI on the other side.
It's no different from spam or anything else you work on.
I think part of the reason it's important to work on technology is technology ends up progressing whether we want it to or not.
I feel on every important technology, it's important that you work aggressively to make sure the outcome is good.
That's the way we think a lot about our work.
Let's talk about another probably favorite subject, China.
(audience laughing) We've been hearing that Google has been working on a project called Dragonfly, which is a search engine which would be able to work with the Chinese rules of censorship.
I spent a lot of time on my book writing about Google's experience in China previously.
It had a search engine that worked, or attempted to work within those boundaries.
Super controversial inside the company.
Eventually, the company pulled out.
Consider that again, why go back in? And what's the status of Dragonfly? Steven, you wrote a lot about it for those of you who are unfamiliar.
In 2006, Google went into China.
We served Search.
In 2010, we stopped serving Search in China, but we didn't exit the country.
We have engineers and over the past few years, we have hired more people.
Android is obviously a very popular operating system there.
We support small and medium businesses there in terms of them exporting their products, et cetera.
We are in the market.
It's been eight years.
Every time we are in a country, our mission is to provide information to everyone.
It's 20% of the world's population, so it does weigh heavily on us.
Any time we work in countries across the world, it's probably people don't understand it fully, but we are always balancing a set of values.
We are providing users access to information, freedom of expression, user privacy, but we also follow the rule of law in every country we do.
Obviously, when it comes to China, given our history, it's a more weighty topic.
Our intent was the reason we did the internal project was to complete it, it's been many years.
We've been out of the market.
It's a wonderful, innovative market.
We wanted to learn what it would look like if Google were in China.
That's what we've built internally.
If Google were to operate in China, what would it look like? What queries will we be able to serve? It turns out we'll be able to serve well over 99% of the queries.
There are many, many areas where we would provide information better than what's available.
When people type cancer treatments, today, people either get fake cancer treatments or they actually get useful information.
Things like that weigh heavily on us but we wanna balance it with what the conditions would be.
It's very early.
We don't know whether we would or could do this in China, but we felt it was important for us to explore.
I take a long-term view on this.
I think it's important for us, given how important the market is and how many users there are.
We feel obliged to think hard about this problem and take a long-term view.
You've been at Google since what, 2004? Yeah. Is that right? I remember that.
I guess the theme of the day is when Wired started out, we were just brimming with optimism about what the Internet was gonna do and how freedom of expression would go forth.
The late John Perry Barlow, who is a big part of our community, expounded on this.
We've seen, you've just talked about it, the various things, considerations in China, how it's a more tough world to spread that kind of freedom that the Internet seemed to provide.
Google's right in the middle of it.
We heard some of that from Susan's session, fighting fake news and misinformation.
Tell me where you stand on this.
Are you as optimistic as you were in 2004? I still, There are many, many times we run into.
It's part of the work.
We run into people getting access to information for the first time, either buying their first phone.
You see the impact it has.
I grew up without access to computing.
I got it much later in my life.
For sure, it impacted my life very, very positively and profoundly.
I still carry that optimism with me every day.
We do realize technology is working at scale and with that comes a different lens with which you look at it.
I think it's important to be deliberate, thoughtful, have larger societal goals as we make progress through it.
But there's no doubt to me, just looking at the work and the results we are getting by using technology in healthcare alone, there is no doubt to me there's a lot of positive impact ahead.
But we need to learn from what's happened and pick it up and continue working on it.
Just one final thing there I did wanna mention.
Who you nominated for your next 25, can you just briefly tell us who that was? It's a wonderful organization.
It's from my hometown in India.
It's called Aravind Eye Foundation.
They see about 2,000 patients a day.
They try and cure eye diseases.
They do mostly their work for free.
And Dr. Kim.
The ethics and the values with which these people approach their work.
We started working with them because we developed an AI model which can detect an early onset of blindness.
If you detect it early, it's completely curable, but most people don't get detected early.
We are now testing it with them so that they can use that to detect it in more people.
It's a good example of how you think about technology.
Dr. Kim would say people ask questions about, “Hey, with the AI, are you worried it’ll impact your job?” He thinks about it as, “It’ll give me a chance to treat more than 2,000 patients a day, and I wanna be able to do that.”
It's a pleasure to be able to honor Dr. Kim.
On that technology optimistic note, I thank you.
Alright, thank you.
(applause)
" |
776 | 2,018 | "Google's Past Data Use Could Impede Its Health Care Push | WIRED" | "https://www.wired.com/story/googles-past-data-use-could-impede-healthcare-push" | "Google's Past Data Use Could Impede Health Care Push
By Tom Simonite
Tim Pannell/Corbis/Getty Images
Alphabet’s London-based AI lab DeepMind made history in 2016 when its AlphaGo software defeated a champion at the complex board game Go. On Tuesday, the company said it was handing off a seemingly much simpler software challenge: a health care app for hospital staff called Streams being tested by UK hospitals.
That project and its staff will be transferred to DeepMind’s much larger sister, Google.
The announcement prompted an outcry from privacy researchers, which, along with legal constraints on the move, illustrates the challenges Google faces in expanding its data-hungry operating style into the more sensitive business of health care. Last week, Google hired health industry veteran David Feinberg, who previously led the Pennsylvania health system Geisinger, to unify its scattered projects in the field.
Google didn’t respond to a request for comment on its plans. A DeepMind spokesperson said transferring Streams to Google would not change the project’s strict controls on use of data, which remains under the control of its partner hospitals.
In 2014, Google acquired DeepMind for a reported $650 million. The following year DeepMind became part of the new holding company Alphabet, and started working with north London’s Royal Free hospital on a project to reduce deaths from acute kidney injury, a form of sudden kidney failure that can be fatal. The project coalesced around an app called Streams that can alert staff when patients show early signs of the condition—and quickly earned regulatory scrutiny.
UK magazine New Scientist revealed that the project’s data-sharing agreement gave DeepMind access to five years of expansive health records for 1.6 million people. Some of the data seemed unnecessary for Streams to function; it included details such as whether a person was HIV positive, had suffered depression, or had had an abortion. In 2017, UK data regulator the Information Commissioner’s Office said Royal Free had breached the law by allowing DeepMind to use the data without appropriate patient consent, and providing a broader swath of data than justified. The hospital was required to audit its project but wasn’t fined, and DeepMind was not cited.
DeepMind has tried to deflect criticism about its data use, promising that “data will never be connected to Google accounts or services, or used for any commercial purposes like advertising or insurance.” Tuesday’s announcement that it is transferring the whole project to Google sparked renewed concerns among some privacy researchers. “The big story here is Google wants to have all the health data it can,” says Eerke Boiten, a professor of cybersecurity at De Montfort University in the UK. “Its promises have not proved to be reliable.”
History suggests cause for concern. When Google acquired online ad network DoubleClick in 2008, it played down the idea it would merge the two company’s data sets, and kept them separate for almost a decade. In 2017, at a time it was losing market share to Facebook, Google merged the data troves after all.
Rahael Maladwala, a health care analyst at research firm GlobalData, says Google has clear ambitions in health care. In 2011, the company abandoned its first major health project, a records service called Google Health, after weak interest from patients or providers, but it is showing renewed interest. Google lately has spun up projects such as testing AI software to diagnose eye disease in India, and launching health-tracking mobile software to compete with Apple’s HealthKit for iPhones. Google’s top AI boss said last week that new hire Feinberg would help organize the company’s various projects, and coordinate more with fellow Alphabet company Verily, which works on life sciences R&D.
Google isn’t the only data-centric tech company with growing health care ambitions. Amazon is building a health care delivery company with Berkshire Hathaway and JP Morgan. Maladwala says the slow digitization of health care has reached a point where tech industry data smarts can significantly improve diagnoses and efficiency. “We’re going to see a lot more technology companies moving into health care,” he adds.
Legal complications surrounding Google’s planned absorption of Streams show how tech companies can’t roll out their usual data-centric strategies unimpeded. Google is less free to do as it wants with data from the clinic than it is with records of online activity. In the US and Europe, health data is subject to special protections that make moves, like the one DeepMind announced Tuesday, more difficult. Under UK data protection law, DeepMind is not the “controller” of the clinical data crunched by Streams; its partners are. That means Google doesn’t own the data or get to choose how it is processed and used. Similarly in the US, the federal HIPAA law prevents organizations working with health data from arbitrarily adapting it to new purposes.
Worse for Google executives who want to move quickly, the company can’t immediately assume DeepMind’s contracts with hospitals. Those institutions need to give consent, potentially giving them a chance to negotiate different terms. “Nothing changes until [the partners] consent and undertake any necessary engagement, including with patients,” says Dominic King, a former NHS surgeon who now works at DeepMind and will lead Streams at Google.
Not all those partners have signed off. Asked if their institutions would consent to new contracts, a spokesperson for Royal Free said it was “committed” to developing the Streams app; Taunton and Somerset, a hospital in southwest England also using the app, said it was "in discussion" with DeepMind about the project’s change of ownership.
Despite the power of regulators and health care organizations to shape Google’s health care plans, Liz McFall, a researcher at the University of Edinburgh who has followed DeepMind’s efforts, says they may not exercise real oversight. Aging, sickening populations in the US and UK drive health systems to work with tech companies to reduce costs—but also leave them ill-equipped to monitor data use, she adds.
Medical and data authorities also seem out of their depth, according to McFall: “Existing regulation and ethics standards weren’t written for a digital health world.”
" |
777 | 2,018 | "Google Cofounder Sergey Brin Warns of AI's Dark Side | WIRED" | "https://www.wired.com/story/google-cofounder-sergey-brin-warns-of-ais-dark-side" | "Google Cofounder Sergey Brin Warns of AI's Dark Side
By Tom Simonite
Google cofounder Sergey Brin says advances in artificial intelligence bring new questions and responsibilities. Kimberly White/Getty Images for Breakthrough Prize
Artificial intelligence is a recurring theme in recent remarks by top executives at Alphabet. The company’s latest Founders’ Letter, penned by Sergey Brin, is no exception—but he also finds time to namecheck possible downsides around safety, jobs, and fairness.
The company has issued a Founders’ Letter—usually penned by Brin, cofounder Larry Page, or both—every year, beginning with the letter that accompanied Google’s 2004 IPO. Machine learning and artificial intelligence have been mentioned before. But this year Brin expounds at length on a recent boom in development in AI that he describes as a “renaissance.” “The new spring in artificial intelligence is the most significant development in computing in my lifetime,” Brin writes—no small statement from a man whose company has already wrought great changes in how people and businesses use computers.
When Google was founded in 1998, Brin writes, the machine learning technique known as artificial neural networks, invented in the 1940s and loosely inspired by studies of the brain, was “a forgotten footnote in computer science.” Today the method is the engine of the recent surge in excitement and investment around artificial intelligence. The letter unspools a partial list of where Alphabet uses neural networks, for tasks such as enabling self-driving cars to recognize objects, translating languages, adding captions to YouTube videos, diagnosing eye disease, and even creating better neural networks.
Brin nods to the gains in computing power that have made this possible. He says the custom AI chip running inside some Google servers is more than a million times more powerful than the Pentium II chips in Google’s first servers. In a flash of math humor, he says that Google’s quantum computing chips might one day offer jumps in speed over existing computers that can only be described with the number that gave Google its name: a googol, or a 1 followed by 100 zeroes.
As you might expect, Brin expects Alphabet and others to find more uses for AI. But he also acknowledges that the technology brings possible downsides. “Such powerful tools also bring with them new questions and responsibilities,” he writes.
AI tools might change the nature and number of jobs, or be used to manipulate people, Brin says—a line that may prompt readers to think of concerns around political manipulation on Facebook. Safety worries range from “fears of sci-fi style sentience to the more near-term questions such as validating the performance of self-driving cars,” Brin writes.
All that might sound like a lot for Google and the tech industry to contemplate while also working at full speed to squeeze profits from new AI technology. Even some Google employees aren’t sure the company is on the right track—thousands signed a letter protesting the company’s contract with the Pentagon to apply machine learning to video from drones.
Brin doesn’t mention that challenge, and wraps up his discussion of AI’s downsides on a soothing note. His letter points to the company’s membership in the industry group Partnership on AI, and Alphabet’s research in areas such as how to make learning software that doesn’t cheat, and AI software whose decisions are more easily understood by humans.
“I expect machine learning technology to continue to evolve rapidly and for Alphabet to continue to be a leader — in both the technological and ethical evolution of the field,” Brin writes.
" |
778 | 2,016 | "In Two Moves, AlphaGo and Lee Sedol Redefined the Future | WIRED" | "https://www.wired.com/2016/03/two-moves-alphago-lee-sedol-redefined-future" | "In Two Moves, AlphaGo and Lee Sedol Redefined the Future
By Cade Metz. Geordie Wood for WIRED
SEOUL, SOUTH KOREA — In Game Two, the Google machine made a move that no human ever would. And it was beautiful. As the world looked on, the move so perfectly demonstrated the enormously powerful and rather mysterious talents of modern artificial intelligence.
But in Game Four, the human made a move that no machine would ever expect. And it was beautiful too. Indeed, it was just as beautiful as the move from the Google machine---no less and no more. It showed that although machines are now capable of moments of genius, humans have hardly lost the ability to generate their own transcendent moments. And it seems that in the years to come, as we humans work with these machines, our genius will only grow in tandem with our creations.
This week saw the end of the historic match between Lee Sedol, one of the world's best Go players, and AlphaGo, an artificially intelligent system designed by a team of researchers at DeepMind, a London AI lab now owned by Google. The machine claimed victory in the best-of-five series, winning four games and losing only one. It marked the first time a machine had beaten the very best at this ancient and enormously complex game---a feat that, until recently, experts didn't expect would happen for another ten years.
The victory is notable because the technologies at the heart of AlphaGo are the future. They're already changing Google and Facebook and Microsoft and Twitter, and they're poised to reinvent everything from robotics to scientific research.
This is scary for some. The worry is that artificially intelligent machines will take our jobs and maybe even break free from our control---and on some level, those worries are healthy. We won't be caught by surprise.
But there's another way to think about all this---a way that gets us beyond the trope of human versus machine, guided by the lessons of those two glorious moves.
With the 37th move in the match’s second game, AlphaGo landed a surprise on the right-hand side of the 19-by-19 board that flummoxed even the world’s best Go players, including Lee Sedol. “That’s a very strange move,” said one commentator, himself a nine dan Go player, the highest rank there is. “I thought it was a mistake,” said the other. Lee Sedol, after leaving the match room, took nearly fifteen minutes to formulate a response. Fan Hui—the three-time European Go champion who played AlphaGo during a closed-door match in October, losing five games to none—reacted with incredulity. But then, drawing on his experience with AlphaGo—he has played the machine time and again in the five months since October—Fan Hui saw the beauty in this rather unusual move.
Indeed, the move turned the course of the game. AlphaGo went on to win Game Two, and at the post-game press conference, Lee Sedol was in shock. “Yesterday, I was surprised,” he said through an interpreter, referring to his loss in Game One.
"But today I am speechless. If you look at the way the game was played, I admit, it was a very clear loss on my part. From the very beginning of the game, there was not a moment in time when I felt that I was leading." It was a heartbreaking moment. But at the same time, those of us who watched the match inside Seoul's Four Seasons hotel could feel the beauty of that one move, especially after talking to the infectiously philosophical Fan Hui. "So beautiful," he kept saying. "So beautiful." Then, the following morning, David Silver, the lead researcher on the AlphaGo project, told me how the machine had viewed the move. And that was beautiful too.
Originally, Silver and his team taught AlphaGo to play the ancient game using a deep neural network—a network of hardware and software that mimics the web of neurons in the human brain. This technology already underpins online services inside places like Google and Facebook and Twitter, helping to identify faces in photos, recognize commands spoken into smartphones, drive search engines, and more. If you feed enough photos of a lobster into a neural network, it can learn to recognize a lobster. If you feed it enough human dialogue, it can learn to carry on a halfway decent conversation.
And if you feed it 30 million moves from expert players, it can learn to play Go.
But then the team went further. Using a second AI technology called reinforcement learning, they set up countless matches in which (slightly) different versions of AlphaGo played each other. And as AlphaGo played itself, the system tracked which moves brought the most territory on the board. "AlphaGo learned to discover new strategies for itself, by playing millions of games between its neural networks, against themselves, and gradually improving," Silver said when Google unveiled AlphaGo early this year.
Then the team took yet another step. They collected moves from these machine-versus-machine matches and fed them into a second neural network. This neural net trained the system to examine the potential results of each move, to look ahead into the future of the game.
So AlphaGo learns from human moves, and then it learns from moves made when it plays itself. It understands how humans play, but it can also look beyond how humans play to an entirely different level of the game. This is what happened with Move 37. As Silver told me, AlphaGo had calculated that there was a one-in-ten-thousand chance that a human would make that move. But when it drew on all the knowledge it had accumulated by playing itself so many times---and looked ahead in the future of the game---it decided to make the move anyway. And the move was genius.
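For the mechanically minded, the self-play stage can be sketched as a policy-gradient loop in which one network plays both sides and the winner's moves are reinforced. This is a toy REINFORCE-style stand-in, not DeepMind's code: the play_game object (with reset, step, and winner methods) is a hypothetical interface, and the real system also trained the separate look-ahead network described above:

    import torch
    import torch.nn as nn

    BOARD, MOVES = 81, 81  # toy 9x9 board: one input per point, one move per point
    policy = nn.Sequential(nn.Linear(BOARD, 128), nn.ReLU(), nn.Linear(128, MOVES))
    opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

    def self_play_update(play_game) -> None:
        """One update: both players share the policy; the eventual winner's moves are reinforced."""
        log_probs, movers = [], []
        state, to_move, done = play_game.reset()  # hypothetical two-player game interface
        while not done:
            dist = torch.distributions.Categorical(logits=policy(state))
            move = dist.sample()
            log_probs.append(dist.log_prob(move))
            movers.append(to_move)
            state, to_move, done = play_game.step(move.item())
        winner = play_game.winner()  # +1 or -1
        # Moves made by the eventual winner get return +1; the loser's get -1.
        returns = torch.tensor([1.0 if p == winner else -1.0 for p in movers])
        loss = -(torch.stack(log_probs) * returns).sum()
        opt.zero_grad(); loss.backward(); opt.step()

Tracking which moves brought the most territory, as Silver describes, corresponds to the returns vector here; AlphaGo's actual reward design and training schedule were far more elaborate.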
Lee Sedol then lost Game Three, and AlphaGo claimed the million-dollar prize in the best-of-five series. The mood inside the Four Seasons dipped yet again. “I don’t know what to say today, but I think I will have to express my apologies first,” Lee Sedol said. “I should have shown a better result, a better outcome, a better contest in terms of the games played.”
In Game Four, he was intent on regaining some pride for himself and the tens of millions who watched the match across the globe. But midway through the game, the Korean's prospects didn't look good. "Lee Sedol needs to do something special," said one commentator. "Otherwise, it’s just not going to be enough." But after considering his next move for a good 30 minutes, he delivered something special. It was Move 78, a "wedge" play in the middle of the board, and it immediately turned the game around.
As we found out after the game, AlphaGo made a disastrous play with its very next move, and just minutes later, after analyzing the board position, the machine determined that its chances of winning had suddenly fallen off a cliff. Commentator and nine dan Go player Michael Redmond called Lee Sedol's move brilliant: "It took me by surprise. I'm sure that it would take most opponents by surprise. I think it took AlphaGo by surprise." Among Go players, the move was dubbed "God's Touch." It was high praise indeed. But then the higher praise came from AlphaGo.
Korean news anchorwoman reporting from the match. Geordie Wood for WIRED
The next morning, as he walked down the main boulevard in Sejong Daero just down the street from the Four Seasons, I discussed the move with Demis Hassabis, who oversees the DeepMind Lab and was very much the face of AlphaGo during the seven-day match. As we walked, the passers-by treated him like a celebrity—and indeed he was, after appearing in countless newspapers and on so many TV news shows. Here in Korea, where more than 8 million people play the game of Go, Lee Sedol is a national figure.
Hassabis told me that AlphaGo was unprepared for Lee Sedol’s Move 78 because it didn’t think that a human would ever play it. Drawing on its months and months of training, it decided there was a one-in-ten-thousand chance of that happening. In other words: exactly the same tiny chance that a human would have played AlphaGo’s Move 37 in Game Two.
The symmetry of these two moves is more beautiful than anything else.
One-in-ten-thousand and one-in-ten-thousand.
This is what we should all take away from these astounding seven days. Hassabis and Silver and their fellow researchers have built a machine capable of something super-human. But at the same time, it's flawed. It can't do everything we humans can do. In fact, it can't even come close. It can't carry on a conversation. It can't play charades.
It can't pass an eighth grade science test.
It can't account for God's Touch.
But think about what happens when you put these two things together. Human and machine. Fan Hui will tell you that after five months of playing match after match with AlphaGo, he sees the game completely differently. His world ranking has skyrocketed. And apparently, Lee Sedol feels the same way. Hassabis says that he and the Korean met after Game Four, and that Lee Sedol echoed the words of Fan Hui. Just these few matches with AlphaGo, the Korean told Hassabis, have opened his eyes.
This isn't human versus machine. It's human and machine. Move 37 was beyond what any of us could fathom. But then came Move 78. And we have to ask: If Lee Sedol hadn't played those first three games against AlphaGo, would he have found God's Touch? The machine that defeated him had also helped him find the way.
" |
779 | 2,016 | "OpenAI Gym Beta" | "https://openai.com/blog/openai-gym-beta" | "OpenAI Gym Beta
April 27, 2016
OpenAI Gym is compatible with algorithms written in any framework, such as Tensorflow and Theano.
The environments are written in Python, but we’ll soon make them easy to use from any language. We originally built OpenAI Gym as a tool to accelerate our own RL research. We hope it will be just as useful for the broader community.
Getting started
If you’d like to dive in right away, you can work through our tutorial.
You can also help out while learning by reproducing a result.
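To give a flavor of what that first dive looks like, here is the canonical hello-world of Gym: a random agent on a classic-control task. It is written against the original 2016-era API, in which reset() returns an observation and step() returns a four-tuple; later releases of the library changed both signatures:

    import gym

    env = gym.make("CartPole-v0")  # one of the classic-control environments
    for episode in range(5):
        observation = env.reset()
        total_reward, done = 0.0, False
        while not done:
            action = env.action_space.sample()  # random agent: no learning yet
            observation, reward, done, info = env.step(action)
            total_reward += reward
        print("episode", episode, "reward", total_reward)
    env.close()

Swapping in a real algorithm means replacing env.action_space.sample() with a policy that maps observations to actions and learns from the rewards.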
Why RL?
Reinforcement learning (RL) is the subfield of machine learning concerned with decision making and motor control. It studies how an agent can learn how to achieve goals in a complex, uncertain environment. It’s exciting for two reasons:
RL is very general, encompassing all problems that involve making a sequence of decisions: for example, controlling a robot’s motors so that it’s able to run and jump, making business decisions like pricing and inventory management, or playing video games and board games.
RL can even be applied to supervised learning problems with sequential or structured outputs.
RL algorithms have started to achieve good results in many difficult environments.
RL has a long history, but until recent advances in deep learning, it required lots of problem-specific engineering. DeepMind’s Atari results, BRETT from Pieter Abbeel’s group, and AlphaGo all used deep RL algorithms which did not make too many assumptions about their environment, and thus can be applied in other settings.
However, RL research is also slowed down by two factors:
The need for better benchmarks.
In supervised learning, progress has been driven by large labeled datasets like ImageNet.
In RL, the closest equivalent would be a large and diverse collection of environments. However, the existing open-source collections of RL environments don’t have enough variety, and they are often difficult to even set up and use.
Lack of standardization of environments used in publications.
Subtle differences in the problem definition, such as the reward function or the set of actions, can drastically alter a task’s difficulty. This issue makes it difficult to reproduce published research and compare results from different papers.
OpenAI Gym is an attempt to fix both problems.
The environments
OpenAI Gym provides a diverse suite of environments that range from easy to difficult and involve many different kinds of data. We’re starting out with the following collections:
Classic control and toy text: complete small-scale tasks, mostly from the RL literature. They’re here to get you started.
Algorithmic: perform computations such as adding multi-digit numbers and reversing sequences. One might object that these tasks are easy for a computer. The challenge is to learn these algorithms purely from examples. These tasks have the nice property that it’s easy to vary the difficulty by varying the sequence length.
Atari: play classic Atari games. We’ve integrated the Arcade Learning Environment (which has had a big impact on reinforcement learning research) in an easy-to-install form.
Board games: play Go on 9x9 and 19x19 boards. Two-player games are fundamentally different than the other settings we’ve included, because there is an adversary playing against you. In our initial release, there is a fixed opponent provided by Pachi, and we may add other opponents later (patches welcome!). We’ll also likely expand OpenAI Gym to have first-class support for multi-player games.
2D and 3D robots: control a robot in simulation. These tasks use the MuJoCo physics engine, which was designed for fast and accurate robot simulation. Included are some environments from a recent benchmark by UC Berkeley researchers (who incidentally will be joining us this summer). MuJoCo is proprietary software, but offers free trial licenses.
Over time, we plan to greatly expand this collection of environments. Contributions from the community are more than welcome.
Each environment has a version number (such as Hopper-v0). If we need to change an environment, we’ll bump the version number, defining an entirely new task. This ensures that results on a particular environment are always comparable.
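Concretely, the version suffix is part of the id string passed to gym.make, so pinning a result to a task is just pinning that string:

    import gym

    # This id always refers to the task exactly as originally defined.
    env = gym.make("Hopper-v0")
    # If the task's definition ever changes, it ships under a new id (for
    # example "Hopper-v1"), so scores reported against "Hopper-v0" stay
    # comparable. (Hopper needs the MuJoCo backend; CartPole-v0 does not.)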
Evaluations
We’ve made it easy to upload results to OpenAI Gym. However, we’ve opted not to create traditional leaderboards. What matters for research isn’t your score (it’s possible to overfit or hand-craft solutions to particular tasks), but instead the generality of your technique.
We’re starting out by maintaining a curated list of contributions that say something interesting about algorithmic capabilities. Long-term, we want this curation to be a community effort rather than something owned by us. We’ll necessarily have to figure out the details over time, and we’d love your help in doing so.
We want OpenAI Gym to be a community effort from the beginning. We’ve started working with partners to put together resources around OpenAI Gym:
NVIDIA: technical Q&A with John.
Nervana: implementation of a DQN OpenAI Gym agent.
Amazon Web Services (AWS): $250 credit vouchers for select OpenAI Gym users. If you have an evaluation demonstrating the promise of your algorithm and are resource-constrained from scaling it up, ping us for a voucher. (While supplies last!)
During the public beta, we’re looking for feedback on how to make this into an even better tool for research. If you’d like to help, you can try your hand at improving the state-of-the-art on each environment, reproducing other people’s results, or even implementing your own environments. Also please join us in the community chat!
Authors: Greg Brockman
" |
780 | 2,012 | "8 Visionaries on How They Spot the Future | WIRED" | "https://www.wired.com/epicenter/2012/04/ff_spotfuture_qas" | "Joanna Pearlstein
8 Visionaries on How They Spot the Future
Spotting the future is an art. We asked eight of our favorite visionaries for their techniques.
A longtime technology forecaster, Paul Saffo is a managing director at the Silicon Valley investment research firm Discern. Formerly the director of the Institute for the Future, he is also a consulting professor in Stanford University's engineering department.
There are four indicators I look for: contradictions, inversions, oddities, and coincidences. In 2007 stock prices and gold prices were both soaring. Usually you don't see those prices high at the same time. When you see a contradiction like that, it means more fundamental change is ahead.
The second indicator is an inversion, where you see something that's out of place. When the Mexican police captured the head of a drug cartel, in the photos the perpetrators were looking proudly at the camera while the cops were wearing ski masks. Usually it's the reverse. To me that was an indicator that Mexico was very far from winning its war against the cartels.
Then there are oddities. When the Roomba robot vacuum was introduced in 2002, all the engineers I know were very excited, and I don't recall them owning vacuums. I said, this is damn strange. This is not about cleaning floors, this is about scratching some kind of itch. It's about something happening with robots.
Finally, there are coincidences. At the fourth Darpa Grand Challenge in 2007, a bunch of robots successfully drove in a simulated suburb. The same day, there was a 118-car pileup on a California highway. We had robots that understand the California vehicle code better than humans, and a bunch of humans crashing into each other. That said to me, really, people shouldn't drive.
Illustration: Andrew Zbihlyj; Brant Ward/Corbis
Founder of the influential Release 1.0 newsletter and PC Forum conference director, Esther Dyson is an angel investor in technology, health care, and space travel companies. She sits on the boards of 23andMe, the Long Now Foundation, the Santa Fe Institute, and Evernote, among others.
The first thing I do is go where other people aren't. I leave Silicon Valley and spend a lot of time not just in New York but in Russia and in other far-off places. Any time you approach something as an outsider, you're able to see what people who are familiar with it can't. I love traveling because I love seeing how many different ways there are to do things.
The other thing is to be curious. My parents are both scientists, so I learned to ask "Why, why, why?" Mostly I look at what I'm interested in, and that doesn't necessarily mean it's what the world will find interesting. I can be self-indulgent.
Illustration: Andrew Zbihlyj; Nadine Rupp/Getty
Juan Enriquez is managing director at Excel Medical Ventures and chair and CEO of Biotechonomy, a Boston investment firm. He's the author of The Untied States of America and As the Future Catches You.
A clear view of the future is often obstructed by taking too much for granted. Like: "We are the human species." Really? It turns out that when you consider Cro-Magnon, Australopithecus, etc., we've had 29 upgrades. So unless you believe that the purpose of all of this evolution was to create Rush Limbaugh and Howard Stern and then flatline, you have to ask: Is it possible to have another upgrade? Or what about "In 50 years, the US flag will still have 50 stars"? So why would you assume continuity for the next 50 years? It's when we question our most cherished assumptions that it gets really interesting to play with this stuff.
Illustration: Andrew Zbihlyj; Simon Russell/Getty
Founder of the eponymous tech book publisher, Tim O'Reilly launched several influential gatherings of the technorati, including Web 2.0, Foo Camp, and Maker Faire.
I don't really think I spot the future; I spot the things in the present that tell us something about the future. I look for interesting people. I find the cool kids and then say, what are they doing? The myth of innovation is that it starts with entrepreneurs, but it really starts with people having fun. The Wright brothers weren't trying to build an airline, they were saying, "Holy shit, do you think we could fly?" The first kids who made snowboards, they just glued skis together and said, "Let's try this!" With the web, none of us thought there was money in it. People said, "This document came from halfway around the world. How awesome is that!"
Illustration: Andrew Zbihlyj; David Brabyn/Corbis
As a Stanford professor in the 1970s, Vint Cerf co-invented TCP/IP with Bob Kahn. He helped pioneer packet-switching and went on to lead development of email and data infrastructure at MCI. In 2005, he was awarded the Presidential Medal of Freedom. Cerf is now chief Internet evangelist at Google.
I like Alan Kay's comment "The best way to predict the future is to invent it." Sometimes spotting the future is really a question of realizing what's now possible and actually trying it out. In my case, working with Bob Kahn, what became the Internet was not possible until certain economic conditions were satisfied—equipment had to be affordable, certain kinds of technology had to be readily available. So some things get invented because it is suddenly possible to invent them.
Illustration: Andrew Zbihlyj; Aleshkovsky Mitya/Corbis
A former Googler, tech executive, and venture capital attorney, Chris Sacca invests in early-stage startups through his firm, Lowercase Capital. His portfolio companies include Facebook, Instagram, Posterous, Twitter, and Uber.
How do I spot the future? Two words: flux capacitor. No, really—I think we venture capitalists get too much credit for predicting the future. We can look very prescient when we talk about why we invested in a company, but we're wrong more than we're right. It just turns out that when we're right, we're really, really right.
It used to be that when you invested in a company, you looked at a business plan. But now we don't have to invest in ideas anymore; now I invest in live URLs and apps that I can download. Plus, the users do the due diligence for us. I search Twitter to see what actual users are saying about something I want to invest in: Is it buggy? Is it a pain in the ass? Are they evangelizing it? After seeing hundreds of positive mentions of Heroku on Twitter, I was in. Salesforce ended up buying it for $225 million.
Another thing I do: I walk around Best Buy every three to four weeks and watch people. When you do this, you see how normal people make product decisions, what their price breaking points might be. In a world of people who've got stock options, there isn't a difference between an $80 thing and a $110 thing, but for real people working hourly wages, there is a huge difference.
Illustration: Andrew Zbihlyj; Noah Berger/Getty
Joi Ito is director of the MIT Media Lab and the former CEO of Creative Commons. He was an early-stage investor in Flickr, Twitter, and Kickstarter.
I believe in serendipity, and in the strength of weak ties. I connect with people from different fields and different places and always use pattern recognition and peripheral vision to spot opportunities in unlikely places.
Agility is essential. Your ability to respond to a suddenly emerging trend is most important. During the financial crisis, the companies that were successful were prepared for anything. Most of the people had prepared for the wrong things. By being agile and having your antennas out, you can react when you see the trend starting, rather than relying on these multiyear, multimillion-dollar analyses on the future of X. Instead of being a futurist, you want to be a nowist.
Illustration: Andrew Zbihlyj; Rob Monk/Getty
A cofounder of Global Business Network and a senior vice president at Salesforce.com, Peter Schwartz is an expert in scenario planning and the author of several books, including The Art of the Long View and The Long Boom.
You look for technologies that are likely to create major inflection points—breaks in a trend, things that are going to accelerate. Those tend to be very powerful. This is especially true with scientific technology and tools. For example, we are seeing the speed and cost of DNA testing falling dramatically—there's now a $1,000 DNA tester. That's clearly going to create an inflection point in the health care curve.
Another way to anticipate change is to watch where scientific talent is heading. Science advances in part by attracting talented people. So if an area is attracting great talent and money from governments and companies, you can expect to see important change.
Illustration: Andrew Zbihlyj; James Leynse/Corbis
" |
781 | 2,012 | "How to Spot the Future | WIRED" | "https://www.wired.com/epicenter/2012/04/ff_spotfuture" | "Thomas Goetz
How to Spot the Future
Photo: Brock Davis
Thirty years ago, when John Naisbitt was writing Megatrends, his prescient vision of America's future, he used a simple yet powerful tool to spot new ideas that were bubbling in the zeitgeist: the newspaper. He didn't just read it, though. He took out a ruler and measured it. The more column inches a particular topic earned over time, the more likely it represented an emerging trend. "The collective news hole," Naisbitt wrote, "becomes a mechanical representation of society sorting out its priorities"—and he used that mechanism to predict the information society, globalism, decentralization, and the rise of networks.
As clever as Naisbitt's method was, it would never work today. There's an infinite amount of ink and pixels spilled on most any topic. These days, spotting the future requires a different set of tools. That's why at Wired, where we constantly endeavor to pinpoint the inventions and trends that will define the future, we have developed our own set of rules. They allow us to size up ideas and separate the truly world-changing from the merely interesting. After 20 years of watching how technology creates a bold and better tomorrow, we have seen some common themes emerge, patterns that have fostered the most profound innovations of our age.
This may sound like a paradox. Surely technology always promises something radically new, wholly unexpected, and unlike anything anybody has seen before. But in fact even when a product or service breaks new ground, it's usually following a familiar trajectory. After all, the factors governing thermodynamics, economics, and human interaction don't change that much. And they provide an intellectual platform that has allowed technology to succeed on a massive scale, to organize, to accelerate, to connect.
So how do we spot the future—and how might you? The seven rules that follow are not a bad place to start. They are the principles that underlie many of our contemporary innovations. Odds are that any story in our pages, any idea we deem potentially transformative, any trend we think has legs, draws on one or more of these core principles. They have played a major part in creating the world we see today. And they'll be the forces behind the world we'll be living in tomorrow.
Look for cross-pollinators.
It's no secret that the best ideas—the ones with the most impact and longevity—are transferable; an innovation in one industry can be exported to transform another. But even more resonant are those ideas that are cross-disciplinary not just in their application but in their origin.
This notion goes way back. When the mathematician John von Neumann applied mathematics to human strategy, he created game theory—and when he crossed physics and engineering, he helped hatch both the Manhattan Project and computer science. His contemporary Buckminster Fuller drew freely from engineering, economics, and biology to tackle problems in transportation, architecture, and urban design.
Sometimes the cross-pollination is potent enough to create entirely new disciplines. This is what happened when Daniel Kahneman and Amos Tversky started to fuse psychology and economics in the 1970s. They were trying to understand why people didn't behave rationally, despite the assumption by economists that they would do so. It was a question that economists had failed to answer for decades, but by cross-breeding economics with their own training as psychologists, Kahneman and Tversky were able to shed light on what motivates people. The field they created—behavioral economics—is still growing today, informing everything from US economic policy to the produce displays at Whole Foods.
More recently, the commonalities between biology and digital technology—code is code, after all—have inspired a new generation to reach across specialties and create a range of new cross-bred disciplines: bioinformatics, computational genomics, synthetic biology, systems biology. All these fields view biology as a technology that can be manipulated and industrialized. As Rob Carlson, founder of Biodesic and a pioneer in this arena, puts it, "The technology we use to manipulate biological systems is now experiencing the same rapid improvement that has produced today's computers, cars, and airplanes." These similarities and common toolsets can accelerate the pace of innovation.
The same goes for old industries, as well. The vitality we see in today's car industry resulted from the recognition that auto manufacturing isn't a singular industry siloed in Detroit. In the past decade, car companies have gone from occasionally dispatching ambassadors to Silicon Valley to opening lab space there—and eagerly incorporating ideas from information technology and robotics into their products. When Ford CEO Alan Mulally talks about cars as the "all-time mobile application," he's not speaking figuratively—he's trying to reframe the identity of his company and the industry. That's testimony to a wave of cross-pollination that will blur the line between personal electronics and automobiles.
The point here is that by drawing on threads from several areas, interdisciplinary pioneers can weave together a stronger, more robust notion that exceeds the bounds of any one field. (One caveat: Real cross-pollination is literal, not metaphorical. Be wary of flimflam futurists who spin analogies and draw equivalences without actually identifying common structures and complementary systems).
Cross-pollination can be potent enough to generate entirely new disciplines.
Photo: Brock Davis
Surf the exponentials.
Some trends are so constant, they verge on cliché. Just mentioning Moore's law can cause eyes to roll, but that overfamiliarity doesn't make Gordon Moore's 1965 insight—that chips will steadily, exponentially get smaller, cheaper, faster—any less remarkable. Not only has it been the engine of the information age, it has also given us good reason to believe in our capacity to invent our future, not just submit to it. After all, Moore's law doesn't know which silicon innovation will take us to the next level. It just says that if the previous 50 years are any indication, something will come along. And so far, it always has.
Moore's law has been joined by—and has itself propelled—exponential progress in other technologies: in networks, sensors, and data storage (the first iPod, in 2001, offered 5 gigabytes for $399, while today's "classic" model offers 160 gigs for $249, a 51-fold improvement in storage per dollar). Each of these cyclically improving technologies creates the opportunity to "surf exponentials," in the words of synthetic biologist Drew Endy—to catch the wave of smaller, cheaper, and faster and to channel that steady improvement into business plans and research agendas.
This was the great insight that inspired YouTube, when cofounder Jawed Karim realized (while reading Wired, it so happens) that broadband was becoming so cheap and ubiquitous that it was on the verge of disrupting how people watched videos. And it's what Dropbox did with digital storage. As the cost of disc space was dropping at an exponential rate, Dropbox provided a service capitalizing on that phenomenon, offering to store people's data in the cloud, gratis. In 2007 the two free gigabytes the company offered were really worth something. These days 2 gigs is a pittance, but it remains enough of a lure that people are still signing up in droves—some fraction of whom then upgrade to the paid service and more storage.
And it's what allowed Fitbit to outdo Nike+. As accelerometers dropped in cost and size, Fitbit could use them to measure not just jogging, but any activity where movement matters, from walking to sleep. For all its marketing muscle, Nike didn't recognize that accelerometers were the dynamo of a personal health revolution. The new FuelBand shows that the company has now caught on, but Fitbit recognized the bigger trend first.
Exponentials, it turns out, are everywhere. Just choose one, look where it leads, and take a ride.
Favor the liberators.
Liberation comes in two flavors. First are those who recognize an artificial scarcity and move to eliminate it by creating access to goods. See the MP3 revolutionaries who untethered music from the CD, or the BitTorrent anti-tyrannists who created real video-on-demand.
Sometimes, of course, the revolution takes longer than expected. Back in 1993, George Gilder pointed out in these pages that the cost of bandwidth was plummeting so fast as to be imminently free. Gilder's vision has been proven correct, paving the way for Netflix and Hulu. And yet telcos are today—still!—trying to throttle bandwidth. But this is just biding time on the scaffold. In the words of investor Fred Wilson, "scarcity is a shitty business model."
The second flavor of liberation takes a more subtle approach to turning scarcity into plenty. These liberators use the advent of powerful software to put fallow infrastructure to work. Think of how Netflix piggybacked on a national distribution infrastructure by having the US Postal Service carry its red envelopes. Or how the founders of Airbnb recognized our homes as a massive stock of underutilized beds, ready to be put into the lodging market. Or how Uber turns idling drivers into on-call icons on a Google map, blipping their way to you in mere minutes. Reid Hoffman, the philosopher-investor, describes these companies as bringing liquidity to locked-up assets. He means this in the financial sense of "liquidity," the ability to turn capital into currency, but it also works in a more evocative sense. These companies turn static into flow, bringing motion where there was obstruction.
What's it like to live in the future? Ask an Uber driver—these guys are electrons pulsing through a real-life network, and they're delighted by it. So should we all be.
The best companies are liberators. They bring motion where there was obstruction.
Photo: Brock Davis
Give points for audacity.
When "big hairy audacious goal" entered the lexicon in 1994 (courtesy of Built to Last, the management tome by James Collins and Jerry Porras*),* it applied to ambitious executives eager to set high targets for annual revenue growth and increased market share. Yawn. But the term—shortened to BHAG—also coincided with the birth of the web, when innovators began to posit a whole new sort of audacity: to make every book, in every language, available in less than a minute; to organize all the world's information; or to make financial transactions frictionless and transparent.
Audacity is easily written off as naïveté, as overshooting your resources or talents. And that's a danger. Plenty of would-be Napoleons have called for revolutions that never found an army. But you can't make the future without imagining what it might look like.
Too much of the technology world is trying to build clever solutions to picayune problems. Better parking apps or restaurant finders might appeal to venture capitalists looking for a niche, but they are not ideas that seed revolutions. Instead, take a lesson from Tesla Motors, which had the pluck to spend $42 million of its precious capital to buy a factory roughly the size of the Pentagon, stock it with state-of-the-art robots, and begin making wholly viable electric cars. Or look to Square, which has pronounced the cash register a counter-cluttering vestige of the 19th century and created an alternative that will not only make buying things easier but will deliver retailers from their sclerotic relationship with credit card companies.
These times especially call for more than mere incrementalism. Let's demand that our leaders get in over their heads, that they remain a little bit naive about what they're getting into. As venture capitalist Peter Thiel told Wired two years ago, "Am I right and early, or am I just wrong? You always have to wonder." This kind of willingness to take a chance and be early is what keeps the world moving.
Bank on openness.
In 1997 Wired's founding executive editor, Kevin Kelly, wrote a story called "New Rules for the New Economy" (it was in many ways the inspiration for this very piece). His focus was on networks, the "thickening web" that was forging connections of catalytic power. Many of his radical rules have become commonalities today, but two of them are just coming into their own: Connected individuals with shared interests and goals, he argued, create "virtuous circles" that can produce remarkable returns for any company that serves their needs. And organizations that "let go at the top"—forsaking proprietary claims and avoiding hierarchy—will be agile, flexible, and poised to leap from opportunity to opportunity, sacrificing short-term payoffs for long-term prosperity.
Since Kelly wrote his piece, these forces have flourished. Back then open source software was a programming kibbutz, good for creating a hippy-dippy operating system but nothing that could rival the work of Oracle or Microsoft. Today open source is the default choice for corporations from IBM to Google. Even Microsoft is on board, evangelizing Hadoop and Python and opening the Xbox Kinect controller so it can be a platform for artists and roboticists. Supported by coder clubhouses like SourceForge and GitHub, collaborative circles can emerge with stunning spontaneity, responding elastically to any programming need.
More tellingly, in many organizations openness itself has become a philosophical necessity, the catalyst that turns one employee's lark into a billion-dollar business. Companies from Lego to Twitter have created a product and then called on its users to chart its course, allowing virtuous circles to multiply and flourish. Time after time, the open option has prevailed, as Zipcar has gained on Hertz and users have upvoted Reddit over Digg.
The best example may be nearly invisible, even to a dedicated user of the Internet: blogging platforms. Less than a decade ago there were a multitude of services competing for the emerging legion of bloggers: Movable Type, TypePad, Blogger, WordPress. Today, only the last two remain relevant, and of these, the small, scrappy WordPress is the champ. WordPress prevailed for several reasons. For one, it was free and fantastically easy to install, allowing an aspiring blogger (or blogging company) to get off the ground in hours. Users who wanted a more robust design or additional features could turn to a community of fellow users who had created tools to meet their own needs. And that community didn't just use WordPress—many made money on it by selling their designs and plug-ins. Their investment of time and resources emboldened others, and soon the WordPress community was stronger than any top-down business model forged inside the walls of their competition.
Sure, there are Apples and Facebooks that thrive under the old rules of walled gardens and monocultures. But even they try to tap into openness (albeit on their own terms) by luring developers to the App Store and the Open Graph. And for all the closed-world success of these companies, the world at large is moving the other way: toward transparency, collaboration, and bottom-up innovation. True openness requires trust, and that's not available as a plug-in. When transparency is just a marketing slogan, people can see right through it.
Set audacious goals—and don’t worry about getting in over your head.
Photo: Brock Davis
Demand deep design.
Too often in technology, design is applied like a veneer after the hard work is done. That approach ignores how essential design is in our lives. Our lives are beset by clutter, not just of physical goods but of ideas and options and instructions—and design, at its best, lets us prioritize. Think of a supremely honed technology: the book. It elegantly organizes information, delivering it in a compact form, easily scanned asynchronously or in one sitting. The ebook is a worthy attempt to reverse-engineer these qualities—a process that has taken decades and chewed up millions in capital. But still, despite the ingenuity and functionality of the Kindle and the Nook, they don't entirely capture the charms of the original technology. Good design is hard.
Indeed, good design is much, much harder than it looks. When Target redesigned its prescription pill bottle in 2005, the improvement was instantly recognizable—an easy-to-read label that plainly explains what the pill is and when to take it. It was a why-didn't-I-think-of-it innovation that begged to be replicated elsewhere. But judging by the profusion of products and labels that continue to baffle consumers, it has been largely ignored. Same with Apple: The company's design imperative is forever cited as intrinsic to its success, but Apple still stands curiously alone as a company where engineers integrate design into the bones of its products.
Thankfully, we are on the verge of a golden age of design, where the necessary tools and skills—once such limited resources—are becoming automated and available to all of us. This timing is critical. "Too much information" has become the chorus of complaint from all quarters, and the cure is not more design but deeper design, design that filters complexity into accessible units of comprehension and utility. Forget Apple's overpraised hardware aesthetic; its greatest contribution to industrial design was to recognize that nobody reads user's manuals. So it pretty much eliminated them. You can build as many stunning features into a product as you like; without a design that makes them easy to use, they may as well be Easter eggs.
No company has managed this better than Facebook, which outstripped MySpace because it offered constraint over chaos and rigor over randomness. Facebook has tweaked its interface half a dozen times over the years, but it has never lost the essential functionality that users expect. Indeed, its redesigns have been consistently purposeful. Each time, the company's goal has been to nudge users to share a little more information, to connect a little more deeply. And so every change has offered tools for users to better manage their information, making it easier to share, organize, and access the detritus of our lives. Privacy concerns aside, Facebook has helped people bring design into their lives as never before, letting us curate our friends, categorize our family photos, and bring (at least the appearance of) continuity to our personal histories. Services like Pinterest only make this more explicit. They promise to let us organize our interests and inspirations into a clear, elegant form. They turn us into designers and our daily experience into a lifelong project of curation. This is deep design commoditized—the expertise of IDEO without the pricey consulting contract. And done right, it is irresistible.
Spend time with time wasters.
The classic business plan imposes efficiency on an inefficient market. Where there is waste, there is opportunity. Dispatch the engineers, route around the problem, and boom—opportunity seized.
That's a great way to make money, but it's not necessarily a way to find the future. A better signal, perhaps, is to look at where people—individuals—are being consciously, deliberately, enthusiastically inefficient.
In other words, where are they spending their precious time doing something that they don't have to do? Where are they fiddling with tools, coining new lingo, swapping new techniques? That's where culture is created. The classic example, of course, is the Homebrew Computer Club—the group of Silicon Valley hobbyists who traded circuits and advice in the 1970s, long before the actual utility of personal computers was evident. Out of this hacker collective grew the first portable PC and, most famously, Apple itself.
This same phenomenon—people playing—has spurred various industries, from videogames (thank you, game modders) to the social web (thank you, oversharers). Today, inspired dissipation is everywhere. The maker movement is merging bits with atoms, combining new tools (3-D printing) with old ones (soldering irons). The DIY bio crowd is using off-the-shelf techniques and bargain-basement lab equipment, along with a dose of PhD know-how, to put biology into garage lab experiments. And the Quantified Self movement is no longer just Bay Area self-tracking geeks. It has exploded into a worldwide phenomenon, as millions of people turn their daily lives into measurable experiments.
The phenomenon of hackathons, meanwhile, converts free time into a development platform. Hackathons harness the natural enthusiasm of code junkies, aim it at a target, and create a partylike competition atmosphere to make innovation fun. (And increasingly hackathons are drawing folks other than coders.) No doubt there will be more such eruptions of excitement, as the tools become easier, cheaper, and more available.
These rules don't create the future, and they don't guarantee success for those who use them. But they do give us a glimpse around the corner, a way to recognize that in this idea or that person, there might be something big.
Thomas Goetz ([email protected]) is the executive editor of Wired.
" |
782 | 2,012 | "The Man Who Makes the Future: Wired Icon Marc Andreessen | WIRED" | "https://www.wired.com/epicenter/2012/04/ff_andreessen" | "Chris Anderson
The Man Who Makes the Future: Wired Icon Marc Andreessen
Photo: Nigel Parry
He's not a household name like Gates, Jobs, or Zuckerberg. His face isn't known to millions. But during his remarkable 20-year career, no one has done more than Marc Andreessen to change the way we communicate. At 22, he invented Mosaic, the first graphical web browser—an innovation that is perhaps more responsible than any other for popularizing the Internet and bringing it into hundreds of millions of homes. He cofounded Netscape and took it public in a massive (for that time) stock offering that helped catalyze the dotcom boom. He started Loudcloud, a visionary service to bring cloud computing to business clients. And more recently, as a venture capitalist, he has backed an astonishing array of web 2.0 companies, from Twitter to Skype to Groupon to Instagram to Airbnb.
As Wired prepares for its 20th anniversary issue in January 2013, we are launching a series called Wired Icons: in-depth interviews with our biggest heroes, the tenacious pioneers who built digital culture and evangelized it to the world over the past two decades. There's not a more fitting choice for our first icon than Andreessen—a man whose career, which almost exactly spans the history of our magazine, is a lesson in how to spot the future. In an interview at Andreessen's office in Palo Alto, California, Wired editor in chief Chris Anderson talked with him about technological transformation, and about the five big ideas that Andreessen had before everyone else.
Idea one
As a 22-year-old undergraduate at the University of Illinois, Andreessen developed Mosaic, the first graphical browser for the World Wide Web, then brought the technology to Silicon Valley and cofounded Netscape. By August 1995, Netscape had gone public and was worth $2.9 billion.
Chris Anderson: At 22, you're a random kid from small-town Wisconsin, working at a supercomputer center at the University of Illinois. How were you able to see the future of the web so clearly?
Marc Andreessen: It was probably the juxtaposition of the two—being from a small town and having access to a supercomputer. Where I grew up, we had the three TV networks, maybe two radio stations, no cable TV. We still had a long-distance party line in our neighborhood, so you could listen to all your neighbors' phone calls. We had a very small public library, and the nearest bookstore was an hour away. So I came from an environment where I was starved for information, starved for connection.
Anderson: And then at Illinois, you found the Internet.
Andreessen: Right, which could make information so abundant. The future was much easier to see if you were on a college campus. Remember, it was feast or famine in those days. Trying to do dialup was miserable. If you were a trained computer scientist and you put in a tremendous amount of effort, you could do it: You could go get a Netcom account, you could set up your own TCP/IP stack, you could get a 2,400-baud modem. But at the university, you were on the Internet in a way that was actually very modern even by today's standards. At the time, we had a T3 line—45 megabits, which is actually still considered broadband. Sure, that was for the entire campus, and it cost them $35,000 a month! But we had an actual broadband experience. And it convinced me that everybody was going to want to be connected, to have that experience for themselves.
Anderson: But the notion that everyday consumers would want it over dialup—that was pretty radical.
Andreessen: True. At the time, there were four presumptions made against dialup Internet access, and after Mosaic took off I could see that they were all wrong. The first presumption was that dialup flat-out wouldn't work.
Anderson: That it would always be too slow, too clunky.
Andreessen: Right. The second presumption was that it was too expensive—and that it would always stay as expensive as it was. The third presumption was that people wouldn't be smart enough to figure out how to get it working at home. But the most interesting presumption was the fourth one: that consumers wouldn't want it, that they wouldn't know what to do with it.
Anderson: Your big idea, really, was that they would want it—and they'd eventually get it.
Andreessen: Yeah. It was essentially knocking through all four of those assumptions. I thought it was obvious that everyone would want this and that they would be able to do lots of things with it. And I thought it was obvious that the technology would advance to a point where you wouldn't need a computer science degree to do it.
Anderson: Which was the one problem you could do something about.
Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg Andreessen: Well, actually, I think that Mosaic helped address a few of the problems at once. It did make the Internet much easier to use. But making it easier to use also made it more apparent how to use it, all the different things that people could do with it—which then made people want it more. And it's also clear that we helped drive faster bandwidth: By creating the demand, we helped increase the supply.
Anderson: I remember the first time I interviewed you, back in 1995 when I was at The Economist. I thought we were going to talk about, you know, TCP/IP and HTTP. But you wanted to talk about globalization, about international trade. You were already thinking about the Internet in macroeconomic terms. Have you always seen the world that way, or was there an awakening somewhere in the process? Andreessen: The awakening probably happened for me during that period. Once you understand that everybody's going to get connected, a lot of things follow from that. If everybody gets the Internet, they end up with a browser, so they look at web pages—but they can also leave comments, create web pages. They can even host their own server! So not only is everybody consuming, they can also produce. And once you get instantaneous communication with everybody, you have economic activity that's far more advanced, far more liquid, far more distributed than ever before.
Anderson: Looking back on the browser after 20 years, what are the biggest surprises? What did you not expect? Andreessen: Number one, that it worked. The big turning point for me was when Mosaic worked. I was like, wait a minute, you can actually change the world! Anderson: But you got that surprise early on. Mosaic was a huge success within 12 months.
Andreessen: Yeah, that's true. But the second surprise is that it has kept working. Notwithstanding certain cover stories in certain magazines, I think the browser is as relevant today as it's ever been.
Idea two
During the browser wars with Microsoft, when Netscape Navigator and Internet Explorer vied for domination on the PC desktop, Andreessen prophesied a future where computers would dispense with feature-heavy operating systems entirely. Instead, we would use a browser to run programs over the network. Netscape lost its battle with Microsoft, but in key respects Andreessen's vision has come to pass. Google Chrome OS, for example, is a fully browser-based operating system, while most of our favorite applications, from email to social networks, now live entirely on the network.
Anderson (left) and Andreessen in Palo Alto in January 2012.
Photo: Nigel Parry
Anderson: A quote of yours that I've always loved is that Netscape would render Windows "a poorly debugged set of device drivers."
Andreessen: In fairness, you have to give credit for that quote to Bob Metcalfe, the 3Com founder.
Anderson: Oh, it wasn't you? It's always attributed to you.
Andreessen: I used to say it, but it was a retweet on my part. [Laughs.] But yes, the idea we had then, which seems obvious today, was to lift the computing off of each user's device and perform it in the network instead. It's something I think is inherent in the technology—what some thinkers refer to as the "technological imperative." It's as if the technology wants it to happen.
Anderson: As in Stewart Brand's famous formulation that "information wants to be free."
Andreessen: Right. Technology is like water; it wants to find its level. So if you hook up your computer to a billion other computers, it just makes sense that a tremendous share of the resources you want to use—not only text or media but processing power too—will be located remotely. People tend to think of the web as a way to get information or perhaps as a place to carry out ecommerce. But really, the web is about accessing applications. Think of each website as an application, and every single click, every single interaction with that site, is an opportunity to be on the very latest version of that application. Once you start thinking in terms of networks, it just doesn't make much sense to prefer local apps, with downloadable, installable code that needs to be constantly updated.
"We could have built a social element into Mosaic. But back then the Internet was all about anonymity." Anderson: Assuming you have enough bandwidth.
Andreessen: That's the very big if in this equation.
If you have infinite network bandwidth, if you have an infinitely fast network, then this is what the technology wants. But we're not yet in a world of infinite speed, so that's why we have mobile apps and PC and Mac software on laptops and phones. That's why there are still Xbox games on discs. That's why everything isn't in the cloud. But eventually the technology wants it all to be up there.
Anderson: Back in 1995, Netscape began pursuing this vision by enabling the browser to do more.
Andreessen: We knew that you would need some processing to stay on the computer, so we invented JavaScript. And then we also catalyzed Java, which enabled far more sophisticated applications in the network, by building support for Java into the browser. The basic idea, which remains in force today, is that you do some computation on the device, but you want the server application to be in control of that. And the whole process is completely invisible to the user.
Anderson: Unlike with Mosaic, where your original ideas were proven correct within a year, it seems like this idea has taken 15 years to come to fruition.
Andreessen: Right. And only with the arrival of tablets and smartphones, really. If you draw a pie chart of all the personal computing devices in use, smartphones and tablets are now over 50 percent and growing rapidly. It took a lot longer than we expected, but these really are the network computers. Now, in an ironic twist of fate, the devices do have all these local apps...
Anderson: Well, exactly.
Andreessen: ... but I can go on an iPad or an Android smartphone or a Linux tablet and I can access all the same websites and all the same applications and all the same services that I get on my desktop.
Anderson: But we do still have lots of desktops and laptops out there. Let me ask you in 2012: Do you still think that the web and browsers will render computer operating systems a "poorly debugged set of device drivers"?
Andreessen: I will pull a full Henry Kissinger and answer a different question. The application model of the future is the web application model. The apps will live on the web. Mobile apps on platforms like iOS and Android are a temporary step along the way toward the full mobile web. Now, that temporary step may last for a very long time. Because the networks are still limited. But if you grant me the very big assumption that at some point we will have ubiquitous, high-speed wireless connectivity, then in time everything will end up back in the web model. Because the technology wants it to work that way.
Idea three
In September 1999, Andreessen cofounded Loudcloud, a firm that would enable whole businesses to move into the cloud; it would host and manage their web services and software so that companies wouldn't need to run any servers at all. That business didn't last—despite an IPO in 2001, Loudcloud changed its name and business model in 2002 and was sold to Hewlett-Packard in 2007. But its vision has been vindicated in the phenomenal rise of Amazon Web Services, which serves as the backbone for hundreds of thousands of businesses online.
Andreessen on the cover of Time in 1996.
Courtesy of: Time Magazine
Anderson: With the name Loudcloud, did you make the first use of the word cloud in this context—as a place where applications run on the network?
Andreessen: It was a common term in the telecom business. AT&T used it to talk about their Centrex service, which—going way back here—took all the hassles of switching phone calls out of the individual enterprise and turned it into a service. So our idea with Loudcloud was to offer a similar proposition, but for software. When we first announced it, I described it as Silicon Valley Power & Light.
Anderson: Tech companies would use it as a utility.
Andreessen: Exactly: the software power grid. We actually used the electrical metaphor more than the telecom metaphor. When electricity first came to factories, every factory had its own generator. But eventually that didn't make any sense, because everyone could draw electricity off the grid. At the height of the first dotcom boom, we saw the exact same thing happening in Silicon Valley. You'd raise $20 million of venture capital, and then you'd have to turn around and write $5 million checks to Oracle, Sun, EMC, and Cisco just to build out your server farm. It was literally like everybody building their own electrical generator over and over again.
Anderson: You were the first company to provide software as a service.
Andreessen: I would say we were the first cloud provider in the modern sense of the term. Our pitch was, you should be able to buy all this software by the drink, instead of having to shell out for the bottle up front. By capitalizing on economies of scale, Loudcloud could provide higher levels of service than you could get in-house, and a startup could get its product to market almost instantaneously. It could spend its time and energy building the actual product instead of trying to figure out how to host it and keep it live. That was the pitch.
Anderson: It didn't really work.
Andreessen: Well, it worked beautifully right up to the point when all the startups went bankrupt, and then all our big clients decided they didn't have to worry about competing with the startups anymore. After that, it went completely sideways. Literally every other company we were competing with went bankrupt; we were the only one that got through it. So we just went back to basics and we said, OK, we couldn't make it work as a service provider, but we think we can make it work as a software company, selling the back-end software to manage big networks of servers. We changed our name to Opsware. That ultimately worked, as a business.
Anderson: You were acquired by HP for $1.6 billion.
Andreessen: That whole transition happened during an unfun time in the tech economy. Everybody went through a crisis of confidence between 2002 and 2006. Up and down Sand Hill Road, VCs would refuse to fund consumer Internet companies, because it had been decided that those simply weren't going to work.
Anderson: Looking back, it's somewhat ironic that you started with the right name, Loudcloud, but abandoned it. Now the world has come back to cloud. What did it take?
Andreessen: In retrospect, we were five or six years too early. Besides the rebound in the startup economy, there have also been two huge developments in server technology. The first is commoditization: We were running on expensive Sun servers, but now you can buy Linux servers at a fraction of the cost. The second is virtualization, which makes managing the servers and apportioning services to clients far easier than was possible back in 1999. And that's why Amazon's cloud service has been so magical. It's the same core concept—but with supercheap hardware, which makes the economics far more attractive for everybody, and with virtualization, which makes the entire environment far more adaptable.
Idea four
In 2004, when very few consumer Internet companies were getting funded, Andreessen cofounded Ning, a service to let groups of people create their own social apps. It was a modest success, but "social" has become just as ubiquitous as he predicted—increasingly, what we buy, what we listen to, even our search results are influenced by our friends' tastes and choices. And most of the successful startups in this arena, from Facebook to Groupon to Instagram, have Andreessen as an investor or board member.
Andreessen on the cover of Wired in 2000.
Anderson: Your bet on Ning hasn't paid off as handsomely as your previous two companies did, but you did bet correctly on a future where social would be knit into everything. What was your thinking around that venture?
Andreessen: In the 1990s, lots of people talked about Moore's law, which predicts that processing speed will increase exponentially, and Metcalfe's law, which holds that a network gets exponentially more valuable as nodes are added. But I was also fascinated with Reed's law. That's a mathematical property about the forming of groups—for any group of size n, the number of subgroups that can be assembled is 2^n.
Anderson: So the bigger the network gets, the more subnetworks that will want to organize themselves—a richer and more varied set of social groups.
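[A back-of-the-envelope illustration, not from the interview: 2^n grows explosively. A network of just 30 people already contains 2^30, roughly a billion, possible subgroups, far more than the n^2 pairwise links counted by Metcalfe's law.]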
Andreessen: We see this playing out in retail, where ecommerce is becoming a group activity. Long before Ning, actually, in 1999, I invested in a company called Mobshop, which was Reed's law applied to commerce, through group sales. It didn't work back then. But 10 years later, I invested in Groupon, because I could see it was the same idea—finding, on the fly, a group of people who want the same product and using their massive numbers to command steep discounts. The Internet lets you aggregate groups in a way that was never previously possible.
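The arithmetic behind those two laws is easy to check. The sketch below is an editor's illustration in plain Python, not anything from the interview; the function names are ours, and the Reed count drops the empty set and the single-member groups from the 2^n subsets.

    def metcalfe(n):
        # Pairwise connections among n nodes: n * (n - 1) / 2.
        return n * (n - 1) // 2

    def reed(n):
        # Possible subgroups of n members: 2**n subsets, minus the
        # empty set and the n single-member sets.
        return 2 ** n - n - 1

    for n in (10, 20, 30):
        print(n, metcalfe(n), reed(n))
    # 10 45 1013
    # 20 190 1048555
    # 30 435 1073741793

At just 30 members the subgroup count passes a billion, which is the on-the-fly aggregation Mobshop attempted and Groupon later rode.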
Anderson: What changed between 1999 and 2009 that made Groupon—and Facebook, and all these other profitable consumer Internet companies—possible? Andreessen: A big part of it was broadband. Ironically, it was during the nuclear winter, from 2000 to 2005, that broadband happened. DSL got built out, cable modems got built out. So then you started to have 100, 200, 300 million people worldwide on broadband. Also, the international market started to really open up: China, India, Indonesia, Thailand, Turkey. Still, though, starting a new consumer Internet company in 2004 was a radical act. [Laughs.]
Anderson: As I recall, your initial concept for Ning was to let groups create their own Craigslists, effectively—trusted marketplaces.
Andreessen: Yeah, at the time we had this concept of "social apps." Friendster hadn't worked, MySpace was just getting a little bit of traction, and Facebook was still at Harvard. What we knew worked were focused applications: Craigslist, eBay, Monster. So our idea was to bring social into these domains, in the form of apps that groups could run for themselves: their own job boards, their own selling marketplaces, and so on. Then later we sort of abstracted that up into the idea of building your own social network.
Anderson: In retrospect, it seems like social is another dimension of the Internet that was there from the beginning—as if the technology wanted it to happen.
Andreessen: I often wonder if we should have built social into the browser from the start. The idea that you want to be connected with your friends, your social circle, the people you work with—we could have built that into Mosaic. But at the time, the culture on the Internet revolved around anonymity and pseudonyms.
Anderson: You built in cookies so that sites could remember each user.
Andreessen: But we didn't build in the concept of identity. I think that might have freaked people out.
Anderson: It might still.
Andreessen: Yeah, I'm not sure at the time people were ready for it. I don't think it was an accident that it took, you know, 13 or 14 years after we introduced the browser for people to say, "I want my identity to be a standard part of this." Anderson: And it took Mark Zuckerberg to figure out how to make it pay off.
Andreessen: It was really a generational shift—a group of young entrepreneurs, including Andrew Mason and Mark Zuckerberg, who weren't burned by the dotcom boom and bust. I came to Ning with all these psychic scars. They just looked at the Internet and said, "This stuff is really cool, and we want to build something new." Anderson: No cynicism.
Andreessen: One of the first times Zuckerberg and I got together, in 2005 or 2006, he stopped me in the middle of conversation and asked: "What did Netscape do?" And I said, "What do you mean, what did Netscape do?" And he was like, "Dude, I was in junior high. I wasn't paying attention." Anderson: How big can Facebook get? Andreessen: We don't really know. The Internet is still the Wild West. Eight years ago, Facebook was just a gleam in a Harvard sophomore's eye. It is still possible to build these things from scratch. So I can't tell you what the top five platforms are going to be even five years from now. I'm pretty sure that Facebook, Apple, and Google will be on that list. But I don't know what the other two will be. Maybe Microsoft comes roaring back with Windows Phone. Maybe Twitter evolves and gets to scale. HP is planning to open source its WebOS—maybe it's that! Or maybe it's something we haven't even heard of, a company that's just getting funded right now.
idea five
In 2009, Andreessen and his longtime business partner, Loudcloud cofounder Ben Horowitz, created a venture capital firm called Andreessen Horowitz. Their vision today: an economy transformed by the rise of computing. Andreessen believes that enormous technology companies can now be built around the use of hyperintelligent software to revolutionize whole sectors of the economy, from retail to real estate to health care.
Photo: Nigel Parry
Anderson: Take us back to when you were forming Andreessen Horowitz. You'd been an investor for some time already, but now you decided to formalize it. So what was the guiding philosophy? Andreessen: Our vision was to be a throwback: a Silicon Valley venture capital firm. We were going to be a single-office firm, focusing primarily on companies in the US and then, within that, primarily companies in Silicon Valley. And—this is the crucial thing—we're only going to invest in companies based on computer science, no matter what sector their business is in. We are looking to invest in what we call primary technology companies.
Anderson: Give me an example.
Andreessen: Airbnb—the startup that lets you rent out your home or a room in your home. Ten years ago you would never have said you could build Airbnb, which is looking to transform real estate with a new primary technology. But now the market's big enough.
Anderson: I guess I'm struggling a little bit with "primary technology." How does Airbnb qualify? Andreessen: Airbnb makes its money in real estate. But everything inside of how Airbnb runs has much more in common with Facebook or Google or Microsoft or Oracle than with any real estate company. What makes Airbnb function is its software engine, which matches customers to properties, sets prices, flags potential problems. It's a tech company—a company where, if the developers all quit tomorrow, you'd have to shut the company down. To us, that's a good thing.
Anderson: I'm probably a little bit elitist in this, but I think a "primary technology" would need to involve, you know, some fundamental new insight in code, some proprietary set of algorithms.
Andreessen: Oh, I agree. I think Airbnb is building a software technology that is equivalent in complexity, power, and importance to an operating system. It's just applied to a sector of the economy instead. This is the basic insight: Software is eating the world. The Internet has now spread to the size and scope where it has become economically viable to build huge companies in single domains, where their basic, world-changing innovation is entirely in the code. We've especially seen it in retail—with companies like Groupon, Zappos, Fab.
>"Amazon is a force for human progress and culture and economics in a way that Borders never was."
Anderson: And these aren't copycats, or me-toos, but fundamentally new insights in software? Andreessen: Yes, absolutely. I have another theory that I call the missing campus puzzle. When you drive down Highway 101 through Silicon Valley, you pass the Oracle campus and then the Google campus and then the Cisco campus. And some people think, wow, they're so big. But what I think is, I've been driving for close to an hour—why haven't I passed a hundred more campuses? Why is there all this open space? Anderson: What's your answer? Andreessen: Think about what it has meant to build a primary technology company up until now. In order to harness a large enough market, to attract the right kind of technical talent, to pay them adequately, to grow the company to critical mass—until now that's only been possible with companies that are providing tools for all sectors, not just specific sectors. Technology has been just a slice of the economy. We've been making the building blocks to get us to today, when technology is poised to remake the whole economy.
Anderson: What categories are next? Andreessen: The next stops, I believe, are education, financial services, health care, and then ultimately government—the huge swaths of the economy that historically have not been addressable by technology, that haven't been amenable to the entrance of Silicon Valley-style software companies. But increasingly I think they're going to be.
Anderson: Today, so much software is instantiated in hardware—Apple being a great example. As software "eats the world," do you think that we'll see fewer companies like Apple that deliver their revolutionary software in the form of shiny objects? Andreessen: Yes, but I'm not a purist. In fact, we're funding some hardware companies. Let me give two examples. The first is Jawbone—they make portable speakers, noise-canceling headsets, and now a wristband that tracks your daily movements. Jawbone is an Apple-style company, in that it has genius in hardware and marketing as well as in software design. But if you took away the software, you'd have nothing.
Anderson: What's the second? Andreessen: The other one is Lytro, which is making light-field cameras—this amazing new technology that lets you capture the whole depth of field in three dimensions and then focus and compose your picture later.
Anderson: It's a computer science company.
Andreessen: Yeah, it's computer science. But it's going to ship as a camera. And before I met Ren Ng, the founder, if you had asked me if we'd ever back a camera company, I would have said you're smoking crack.
Anderson: There's an app for that! Andreessen: And Kodak filed for bankruptcy. But what Ren has is a completely different approach to photography. There's a lot of hardware engineering that goes into it, but 90 percent of the intellectual property is software. So we look at Lytro and we look at Jawbone and we see software expressed as hardware—highly specialized hardware that will be hard to clone.
Anderson: One last question for you. Software eating the world is dematerialization, in some sense: These sectors of the economy get transformed into coding problems. But I'm wondering whether there is an economic path by which dematerialization leads to demonetization—where the efficiency of the software sucks economic value out of the whole system. Take Craigslist, for example: For every million that Craigslist made, it took a billion out of the newspaper industry. If you transform these big, inefficient industries in such a way that the value all accrues to a smaller software company, what's the broad economic impact? Andreessen: My bet is that the positive effects will far outweigh the negatives. Think about Borders, the bookstore chain. Amazon drove Borders out of business, and the vast majority of Borders employees are not qualified to work at Amazon. That's an actual, full-on problem. But should Amazon have been prevented from doing that? In my view, no. Because it's so much better to live in a world where that happened, it's so much better to live in a world where Amazon is ascendant. I told you that my childhood bookstore was something you had to drive an hour to get to. But it was a Waldenbooks, and it was, like, 800 square feet, and it sold almost nothing that you would actually want to read. It's such a better world where we have Amazon, where everything is universally available. They're a force for human progress and culture and economics in a way that Borders never was.
Anderson: So it's creative destruction.
Andreessen: When Milton Friedman was asked about this kind of thing, he said: Human wants and needs are infinite, and so there will always be new industries, there will always be new professions. This is the great sweep of economic history. When the vast majority of the workforce was in agriculture, it was impossible to imagine what all those people would do if they didn't have agricultural jobs. Then a hundred years later the vast majority of the workforce was in industrial jobs, and we were similarly blind: It was impossible to imagine what workers would do without those jobs. Now the majority are in information jobs. If the computers get smart enough, then what? I'll tell you: The then what is whatever we invent next.
Chris Anderson ([email protected]) is editor in chief of Wired.
He wrote about the death of the web in issue 18.09.
" |
783 | 2,013 | "Facebook's 'Deep Learning' Guru Reveals the Future of AI | WIRED" | "https://www.wired.com/2013/12/facebook-yann-lecun-qa" | "Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Cade Metz Business Facebook's 'Deep Learning' Guru Reveals the Future of AI Yann LeCun.
Photo: WIRED/Josh Valcarcel Save this story Save Save this story Save New York University professor Yann LeCun has spent the last 30 years exploring artificial intelligence, designing "deep learning" computing systems that process information in ways not unlike the human brain. And now he's bringing this work to Facebook.
Earlier this week, the social networking giant told the world it had hired the French-born scientist to head its new artificial intelligence lab, which will span operations in California, London, and New York. From Facebook's new offices on Manhattan's Astor Place, LeCun will oversee the development of deep-learning tools that can help Facebook analyze data and behavior on its massively popular social networking service -- and ultimately revamp the way the thing operates.
With deep learning, Facebook could automatically identify faces in the photographs you upload, automatically tag them with the right names, and instantly share them with friends and family who might enjoy them too. Using similar techniques to analyze your daily activity on the site, it could automatically show you more stuff you wanna see.
In some ways, Facebook and AI is a rather creepy combination. Deep learning provides a more effective means of analyzing your most personal of habits. "What Facebook can do with deep learning is unlimited," says Abdel-rahman Mohamed, who worked on similar AI research at the University of Toronto. "Every day, Facebook is collecting the network of relationships between people. It's getting your activity over the course of the day. It knows how you vote -- Democrat or Republican. It knows what products you buy." But at the same time, if you assume the company can balance its AI efforts with your need for privacy, this emerging field of research promises so much for the social networking service -- and so many other web giants are moving down the same road, including Google, Microsoft, and Chinese search engine Baidu.
"It's scary on one side," says Mohamed. "But on the other side, it can make our lives even better." This week, LeCun is at Neural Information Processing Systems Conference in Lake Tahoe -- the annual gathering of the AI community where Zuckerberg and company announced his hire -- but he took a short break from the conference to discuss his new project with WIRED.
We've edited the conversation for reasons of clarity and length.
WIRED: We know you're starting an AI lab at Facebook. But what exactly will you and the rest of your AI cohorts be working on? LeCun: Well, I can tell you about the purpose and the goal of the new organization: It's to make significant progress in AI. We want to do two things. One is to really make progress from a scientific point of view, from the side of technology. This will involve participating in the research community and publishing papers. The other part will be to, essentially, turn some of these technologies into things that can be used at Facebook.
But the goal is really long-term, more long-term than work that is currently taking place at Facebook. It's going to be somewhat isolated from the day-to-day production, if you will -- so that we give people some breathing room to think ahead. When you solve big problems like this, technology always comes out of it, along the way, that's pretty useful.
>'Mark Zuckerberg calls it the theory of the mind. How do we model -- in machines -- what human users are interested in and are going to do?' Yann LeCun
WIRED: What might that technology look like? What might it do? LeCun: The set of technologies that we'll be working on is essentially anything that can make machines more intelligent. More particularly, that means things that are based on machine learning. The only way to build intelligent machines these days is to have them crunch lots of data -- and build models of that data.
The particular set of approaches that have emerged over the last few years is called "deep learning." It's been extremely successful for applications such as image recognition, speech recognition, and a little bit for natural language processing, although not to the same extent. Those things are extremely successful right now, and even if we just concentrated on this, it could have a big impact on Facebook. People upload hundreds of millions of pictures to Facebook each day -- and short videos and signals from chats and messages.
But our mission goes beyond this. How do we really understand natural language, for example? How do we build models for users, so that the content that is being shown to the user includes things that they are likely to be interested in or that are likely to help them achieve their goals -- whatever those goals are -- or that are likely to save them time or intrigue them or whatever. That's really the core of Facebook. It's currently to the point where a lot of machine learning is already used on the site -- where we decide what news to show people and, on the other side of things, which ads to display.
Mark Zuckerberg calls it the theory of the mind. It's a concept that has been floating in AI and cognitive science for a while. How do we model -- in machines -- what human users are interested in and are going to do?
WIRED: The science at the heart of this is actually quite old, isn't it? People like you and Geoff Hinton, who's now at Google, first developed these deep learning methods -- known as "back-propagation" algorithms -- in the mid-1980s.
LeCun: That's the root of it. But we've gone way beyond that. Back-propagation allows us to do what's called "supervised learning." So, you have a collection of images, together with labels, and you can train the system to map new images to labels. This is what Google and Baidu are currently using for tagging images in user photo collections.
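For readers who have never seen the mechanics, the sketch below shows the supervised learning LeCun describes in miniature: a tiny two-layer network trained by back-propagation. It is an editor's illustration only; toy XOR data stands in for the image and label pairs, and nothing here is Facebook's or Google's actual code.

    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
    y = np.array([[0], [1], [1], [0]], dtype=float)              # labels

    W1 = rng.normal(size=(2, 8))  # input -> hidden weights
    W2 = rng.normal(size=(8, 1))  # hidden -> output weights

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for step in range(5000):
        h = sigmoid(X @ W1)                    # forward pass
        p = sigmoid(h @ W2)
        g_out = (p - y) * p * (1 - p)          # error at the output layer
        g_hid = (g_out @ W2.T) * h * (1 - h)   # ...propagated back to hidden
        W2 -= 0.5 * h.T @ g_out                # gradient-descent updates
        W1 -= 0.5 * X.T @ g_hid

    print(sigmoid(sigmoid(X @ W1) @ W2).round(2).ravel())  # approaches [0 1 1 0]

Swap the four toy rows for millions of labeled photos and you have, in outline, the training loop behind the image-tagging systems LeCun mentions.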
That we know works. But then you have things like video and natural language, for which we have very little labeled data. We can't just show a video and ask a machine to tell us what's in it. We don't have enough labeled data, and it's not clear that we could -- even by spending a lot of time getting users to provide labels -- achieve the same level of performance that we do for images.
So, what we do is use the structure of the video to help the system build a model -- the fact that some objects are in front of each other, for example. When the camera moves, the objects that are in front move differently from those in the back. A model of the object spontaneously emerges from this. But it requires us to invent new algorithms, new "unsupervised" learning algorithms.
This has been a very active area of research within the deep learning community. None of us believe we have the magic bullet for this, but we have some things that sort of work and that, in some cases, improve the performance of purely supervised systems quite a lot.
WIRED: You mentioned Google and Baidu. Other web companies, such as Microsoft and IBM, are doing deep learning work as well. From the outside, it seems like all this work has emerged from a relatively small group of deep learning academics, including you and Google's Geoff Hinton.
LeCun: You're absolutely right -- though it is quickly growing, I have to say. You have to realize that deep learning -- I hope you will forgive me for saying this -- is really a conspiracy between Geoff Hinton and myself and Yoshua Bengio, from the University of Montreal. Ten years ago, we got together and thought we were really starting to address this problem of learning representations of the world, for vision and speech.
Originally, this was for things like controlling robots. But we got together and got some funding from a Canadian foundation called CIFAR, the Canadian Institute For Advanced Research. Geoff was the director, and I was the chair of the advisory committee, and we would get together twice a year to discuss progress.
It was a bit of a conspiracy in that the majority of the machine learning and computer vision communities were really not interested in this yet. So, for a number of years, it was confined to those workshops. But then we started to publish papers and we started to garner interest. Then things started to actually work well, and that's when industry started to get really interested.
The interest was much stronger and much quicker than from the academic world. It's very surprising.
>'You have to realize that deep learning -- I hope you will forgive me for saying this -- is really a conspiracy between Geoff Hinton and myself and Yoshua Bengio, from the University of Montreal' Yann LeCun
WIRED: How do you explain the difference between deep learning and ordinary machine learning? A lot of people are familiar with the sort of machine learning that Google did over the first ten years of its life, where it would analyze large amounts of data in an effort to, say, automatically identify web-spam.
LeCun: That's relatively simple machine learning. There's a lot of effort that goes into creating those machine learning systems, in the sense that the system is not able to really process raw data. The data has to be turned into a form that the system can digest. That's called a feature extractor.
Take an image, for example. You can't feed the raw pixels into a traditional system. You have to turn the data into a form that a classifier can digest. This is what a lot of the computer vision community has been trying to do for the last twenty or thirty years -- trying to represent images in the proper way.
But what deep learning allows us to do is learn this representation process as well, instead of having to build the system by hand for each new problem. If we have lots of data and powerful computers, we can build a system that can learn what the appropriate data representation is.
A lot of the limitations of AI that we see today are due to the fact that we don't have good representations for the signal -- or the ones that we have take an enormous amount of effort to build. Deep learning allows us to do this more automatically. And it works better too.
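LeCun's contrast can be made concrete in a few lines. In the sketch below, an editor's illustration with a made-up image and filters, the classical pipeline fixes the feature extractor by hand, while the deep learning pipeline treats the very same filter weights as parameters to be learned.

    import numpy as np

    image = np.random.default_rng(1).random((8, 8))  # stand-in for raw pixels

    def convolve2d(img, kernel):
        # Naive "valid" 2-D convolution; enough for the illustration.
        kh, kw = kernel.shape
        out = np.empty((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
        return out

    # Classical pipeline: a human picks the extractor (here a fixed
    # Sobel edge filter); only the classifier on top gets trained.
    sobel = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    hand_features = convolve2d(image, sobel).ravel()

    # Deep learning pipeline: the filter starts random and is itself a
    # trainable parameter, so the representation is learned from data.
    learned_filter = np.random.default_rng(2).normal(size=(3, 3))
    learned_features = convolve2d(image, learned_filter).ravel()
    # In a real system, gradients from the task loss would now update
    # learned_filter, which is exactly the step the hand-built pipeline lacks.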
" |
784 | 2,015 | "Facebook Launches M, Its Bold Answer to Siri and Cortana | WIRED" | "https://www.wired.com/2015/08/facebook-launches-m-new-kind-virtual-assistant" | "Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Jessi Hempel Business Facebook Launches M, Its Bold Answer to Siri and Cortana Save this story Save Save this story Save Facebook Today, a few hundred Bay Area Facebook users will open their Messenger apps to discover M, a new virtual assistant. Facebook will prompt them to test it with examples of what M can do: Make restaurant reservations. Find a birthday gift for your spouse. Suggest---and then book---weekend getaways.
It won’t take long for Messenger’s users to realize M can accomplish much more than your standard digital helper, suspects David Marcus, vice president of messaging products at Facebook. “It can perform tasks that none of the others can,” Marcus says. That’s because, in addition to using artificial intelligence to complete its tasks, M is powered by actual people.
Companies from Google to TaskRabbit are engineering products to act as superpowered personal assistants. Some, like Apple's Siri, Google Now, or Microsoft's Cortana, rely entirely on technology, and though they can be used by a lot of people, their range of tasks remains limited. Others, like startups Magic and Operator or gig-economy companies like TaskRabbit, employ people to respond to text-based requests. These services can get nearly anything done---for a much smaller number of folks. M is a hybrid. It's a virtual assistant powered by artificial intelligence as well as a band of Facebook employees, dubbed M trainers, who will make sure that every request is answered.
>'We start capturing all of your intent for the things you want to do.' David Marcus
Facebook's goal is to make Messenger the first stop for mobile discovery. Google has long had search locked up on the desktop: Right now, if I'm looking to treat my summer cold, and I'm in front of my laptop, I begin by googling "cold meds Upper West Side." On mobile, however, I may pull up any number of apps--Google, Google Maps, Twitter--to find that out, or I may just ask Siri. Facebook starts at a disadvantage on mobile because it doesn't have its own operating system, and therefore users must download an app, and then open it. Marcus hopes to make up for that by creating a virtual assistant so powerful, it's the first stop for anyone looking to do or buy anything.
“We start capturing all of your intent for the things you want to do,” says Marcus. “Intent often leads to buying something, or to a transaction, and that’s an opportunity for us to [make money] over time.” If M can provide a more efficient service than its competitors, Facebook can boost the number of people using it on mobile, and eventually spur revenue from their transactions. That’s the kind of win-win Marcus was brought in to accomplish at Facebook, which in June 2014 hired him away from PayPal, where he had been CEO. In less than two years, Facebook has more than tripled Messenger’s users to 700 million.
To try the new service, users will tap a small button at the bottom of the Messenger app to send a note to M, the same way they might message anyone on Facebook. M’s software will decode the natural language, ask followup questions in the message thread, and send updates as the task is completed. Users won’t necessarily know whether a computer or a person has helped them; unlike Siri and Cortana, M has no gender.
For now, M doesn't pull from the social data Facebook collects to complete tasks. So, if you request a gift for your spouse, the service will make suggestions based only on your answers to questions it asks you and previous conversations you and M have had. Marcus says that may change "at some point, with proper user consent." The service is free, and will be available to all Facebook Messenger users eventually.
In internal tests, Facebook employees have been using M for several weeks to do everything from organizing dinner parties to tracking down an unusual beverage in New Orleans. “An engineer went to Paris for a couple days, and his friend asked M to redecorate his desk in a French style,” Marcus says. “Twenty-four hours later, the desk was decorated with a proper napkin, baguette bread, and a beret.” One of M’s most popular requests from its Facebook employee testers: the service can call your cable company and endure the endless hold times and automated messages to help you set up home wifi or cancel your HBO.
The thing is: that’s a person on hold on your behalf. Facebook’s M trainers have customer service backgrounds. They make the trickier judgment calls, and perform other tasks that software can’t. If you ask M to plan a birthday dinner for your friend, the software might book the Uber and the restaurant, but a person might surprise your friend at the end of the night by sending over birthday cupcakes from her favorite bakery. “M learns from human behaviors,” says Marcus.
Eventually, the service might be sophisticated enough to figure this out on its own, but not soon. Right now, M trainers sit close to the engineering team inside Facebook offices. The company confirms the trainers are contractors but won't say how many there are. Marcus anticipates that over time, Facebook will employ thousands of them, which will represent a substantial economic investment.
The company anticipates the cost will be offset by the revenue growth it is able to realize by capitalizing on M’s interactions.
As WIRED's Cade Metz explains, Facebook plans to use data generated by the service to feed much more complex AI systems that can reduce the burden on the trainers.
It’s not hard to imagine the business opportunities that M could spawn. For one, should Facebook discover a business is getting lots of inbound requests, it could partner with that company to offer a more direct, efficient service over Messenger.
“If, for instance, you have a lot of calls that have to be placed by people to cable companies,” says Marcus, “That’s a pretty good signal that their customers would actually like a better way to interact with the company and maybe they should have a presence inside of Messenger directly.” Facebook is already helping firms offer customer service through Messenger. At the company’s March developer conference, Marcus announced Businesses on Messenger, a feature that allows businesses to send receipts, notify customers their packages have shipped, and provide basic customer service.
Marcus won't offer metrics to suggest whether the feature has caught on among companies, but he says they have shown a lot of interest, and his team is beginning to work out some of the kinks. "We have a lot of threads open between businesses and people, and the engagement is very good," says Marcus. "Now we want to open it to more businesses." Marcus anticipates that M will expand slowly over time, but that it will eventually reach everyone. As this happens, the array of tasks it performs will certainly grow. Facebook is, by design, rolling out its new assistant in a community in which the users are demographically similar to the M trainers who will be thinking up gifts for their spouses and fun vacation destinations for them.
It’s safe to say that most of Messenger’s 700 million users around the world aren’t looking to book an Uber for a friend’s birthday party or choose between Cancun and Maui for February break. Will M be as good at helping users in the Bronx access food stamps? How about coming to the aid of the single mother in Oklahoma who has a last-minute childcare issue? Marcus is up for the challenge, and so, he says, is M.
" |
785 | 2,015 | "AI's Next Frontier: Machines That Understand Language | WIRED" | "https://www.wired.com/2015/06/ais-next-frontier-machines-understand-language" | "Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Cade Metz Business AI's Next Frontier: Machines That Understand Language Save this story Save Save this story Save With the help of neural networks---vast networks of machines that mimic the web of neurons in the human brain---Facebook can recognize your face.
Google can recognize the words you bark into an Android phone.
And Microsoft can translate your speech into another language.
Now, the task is to teach online services to understand natural language, to grasp not just the meaning of words, but entire sentences and even paragraphs.
At Facebook, artificial intelligence researchers recently demonstrated a system that can read a summary of The Lord of the Rings, then answer questions about the books. Using a neural networking algorithm called Word2Vec, Google is teaching its machines to better understand the relationship between words posted across the Internet---a way of boosting Google Now, a digital assistant that seeks to instantly serve up the information you need at any given moment.
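Word2Vec itself is open source, so the idea is easy to try. The sketch below runs it through the open-source gensim library on a toy corpus; the corpus is made up for this edition, and the snippet assumes gensim's 4.x API, where older releases call the dimension argument size rather than vector_size.

    from gensim.models import Word2Vec

    sentences = [
        ["facebook", "builds", "neural", "networks"],
        ["google", "builds", "neural", "networks"],
        ["networks", "learn", "word", "vectors"],
        ["word", "vectors", "capture", "meaning"],
    ]

    model = Word2Vec(sentences, vector_size=32, window=2, min_count=1, epochs=200)

    # Words that appear in similar contexts end up with similar vectors.
    print(model.wv.most_similar("facebook", topn=3))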
Yann LeCun, who oversees Facebook's AI work, calls natural language processing "the next frontier." Working toward this same end, the AI startup MetaMind has published new research detailing a neural networking system that uses a kind of artificial short-term memory to answer a wide range of questions about a piece of natural language. According to MetaMind, the system can answer everything from very specific queries about what the text describes to more general questions like "What's the sentiment of the text?" or "What's the French translation?" The research, due to appear Wednesday at Arxiv.org, a popular online repository for academic papers, echoes similar research from Facebook and Google, but it takes this work a step further.
"This is a very hot topic, on which the authors of this paper approach or pass the state-of-the-art results on several benchmarks," says Yoshua Bengio, a professor of computer science at the University of Montreal who specializes in artificial intelligence and has reviewed the MetaMind paper. "Their architecture is also interesting in that it is aiming at something potentially very ambitious, trying to sequentially parse a large amount of facts---hopefully one day the whole of Wikipedia and more---in such a way, via a learned semantic representation, that one can answer questions about them."
Typically referred to as "deep learning," modern neural networking algorithms are so powerful in part because they can handle so many different tasks. Other researchers are using these same algorithms to improve autonomous vehicles and build robots that can learn to screw a cap on a bottle. According to Google engineer Jeff Dean, the company's neural networking systems are driving dozens of its online services across the company, from Google+ to Google Now to Street View. With its paper, MetaMind shows how effective these algorithms can be when applied to a wide range of natural language tasks. "That is precisely what makes the beauty and the interest and importance of machine learning," Bengio says. "It is about generic ways to learn tasks."
MetaMind, which builds deep learning systems for other businesses, describes what's called a Dynamic Memory Network. On one level, it mirrors work from Facebook, providing a way for machines to answer questions about what's said in a particular piece of text. Socher and company have demonstrated their Q&A model on the same dataset as Facebook's system. "This is similar to web search," Socher says, "except you give the actual answer rather than just a bunch of links." According to the paper, you can feed the system the following piece of text:
Jane went to the hallway.
Mary walked to the bathroom.
Sandra went to the garden.
Daniel went back to the garden.
Sandra took the milk there.
And when you ask "Where is the milk?," it will respond: "garden." At the same time, the system can judge sentiment---that is, the general feeling the words express. It can identify parts of speech. It can determine the referent of a particular pronoun. And it can translate from one language to another. Basically, the system treats these tasks as additional questions that need answering. Is the text positive or negative? What are the parts of speech? What does "their" or "that" or "him" refer to? What is the French translation of the entire text?
"The insight---and it's almost trivial---is that every task in NLP is actually a question-and-answer task," says MetaMind co-founder and CEO Richard Socher, whose Stanford University PhD focused on machine learning, computer vision, and natural language processing.
The system does all this using what Socher calls "episodic memory." If a neural network is analogous to the cerebral cortex---the means of processing information---its episodic memory is something akin to the hippocampus, which provides short-term memory in humans. In the example of the garden and the milk, the system must "remember" that Sandra is in the garden before determining where the milk is. "You can't do transitive reasoning without episodic memory," Socher says.
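To see why the milk question takes two steps, here is a toy, rule-based sketch of the lookup that MetaMind's network learns end to end. The real Dynamic Memory Network is a trained neural model, not hand-written rules; the regular expressions below are ours and are tuned only to the five sample sentences.

    import re

    story = [
        "Jane went to the hallway.",
        "Mary walked to the bathroom.",
        "Sandra went to the garden.",
        "Daniel went back to the garden.",
        "Sandra took the milk there.",
    ]

    location, holder = {}, {}
    for fact in story:  # process one "episode" at a time
        move = re.match(r"(\w+) (?:went|walked).* to the (\w+)", fact)
        take = re.match(r"(\w+) took the (\w+)", fact)
        if move:
            location[move.group(1)] = move.group(2)
        elif take:
            holder[take.group(2)] = take.group(1)

    def where_is(thing):
        # Hop 1: recall who has the object. Hop 2: recall where they went.
        return location[holder[thing]]

    print(where_is("milk"))  # -> garden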
And, he explains, you can use much the same setup to analyze sentiment or translate words into a new language. "One model---one dynamic memory network---can solve these very different problems," he says. Bengio points out that MetaMind actually trained a slightly different model for each of the tasks. But the ultimate aim is to unify all those tasks. It's another step into the new frontier.
" |
786 | 2,015 | "'Deep Learning' Will Soon Give Us Super-Smart Robots | WIRED" | "https://www.wired.com/2015/05/remaking-google-facebook-deep-learning-tackles-robotics" | "Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Cade Metz Business 'Deep Learning' Will Soon Give Us Super-Smart Robots Yann LeCun.
Josh Valcarcel/WIRED Save this story Save Save this story Save Yann LeCun is among those bringing a new level of artificial intelligence to popular internet services from the likes of Facebook, Google, and Microsoft.
As the head of AI research at Facebook, LeCun oversees the creation of vast "neural networks" that can recognize photos and respond to everyday human language. And similar work is driving speech recognition on Google's Android phones, instant language translation on Microsoft's Skype service, and so many other online tools that can "learn" over time. Using vast networks of computer processors, these systems approximate the networks of neurons inside the human brain, and in some ways, they can outperform humans themselves.
This week in the scientific journal Nature, LeCun---also a professor of computer science at New York University---details the current state of this "deep learning" technology in a paper penned alongside the two other academics most responsible for this movement: University of Toronto professor Geoff Hinton, who's now at Google, and the University of Montreal's Yoshua Bengio. The paper details the widespread progress of deep learning in recent years, showing the wider scientific community how this technology is reshaping our internet services---and how it will continue to reshape them in the years to come.
But as LeCun tells WIRED, deep learning will also extend beyond the internet, pushing into devices that can operate here in the physical world---things like robots and self-driving cars. Just last week, researchers at the University of California at Berkeley revealed a robotic system that uses deep learning tech to teach itself how to screw a cap onto a bottle.
Early this year, big-name chip maker Nvidia and an Israeli company called Mobileye revealed that they were developing deep learning systems that can help power self-driving cars.
LeCun has been exploring similar types of "robotic perception" for over a decade, publishing his first paper on the subject in 2003. The idea was to use deep learning algorithms as a way for robots to identify and avoid obstacles as they moved through the world---something not unlike what's needed with self-driving cars. "It's now a very hot topic," he says.
Yes, Google and many others have already demonstrated self-driving cars. But according to researchers, including LeCun, deep learning can advance the state of the art---just as it has vastly improved technologies such as image recognition and speech recognition. Deep learning algorithms date back to the 1980s, but now that they can tap the enormously powerful network of machines available to today's companies and researchers, they provide a viable way for systems to teach themselves tasks by analyzing enormous amounts of data.
"This is a chance for us to change the model of learning from very shallow, very confined statistics to something extremely open-ended," Sebastian Thrun, who helped launch the Google self-driving car project, said of deep learning in an interview this past fall.
Thrun has left Google, but odds are, the company is already exploring the use of deep learning techniques with its autonomous cars (the first of which are set to hit the road this summer). According to Google research fellow Jeff Dean, the company is now using these techniques across dozens of services, and self-driving cars, which depend so heavily on image recognition, are one of the more obvious applications.
Trevor Darrell, one of the researchers working on deep learning robots at Berkeley, says his team is also exploring the use of the technology in autonomous automobiles. "From a researcher's perspective, there are many commonalities in what it takes to move an arm to insert a peg into a hole and what it takes to navigate a car or a flying vehicle through an obstacle course," he says.
Deep learning is particularly interesting, he says, because it has transformed so many different areas of research. In the past, he says, researchers used very separate techniques for speech recognition, image recognition, translation, and robotics. But now this one set of techniques---though a rather broad set---can serve all these fields.
The result: all of these fields are suddenly evolving at a much faster rate. Face recognition has hit the mainstream. So has speech recognition. And the sort of autonomous machines his team is working on, Darrell says, could reach the commercial market within the next five years. AI is here. But it will soon arrive in a much bigger way.
" |
787 | 2,014 | "Why Facebook Has Entrusted Its Future to the CEO of PayPal | WIRED" | "https://www.wired.com/2014/11/on-david-marcus-and-facebook" | "Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Early Black Friday Deals Best USB-C Accessories for iPhone 15 All the ‘Best’ T-Shirts Put to the Test What to Do If You Get Emails for the Wrong Person Get Our Deals Newsletter Gadget Lab Newsletter Jessi Hempel Business Why Facebook Has Entrusted Its Future to the CEO of PayPal David Marcus Tobias Hase/dpa/Corbis Save this story Save Save this story Save One evening last May, Mark Zuckerberg invited David Marcus to dinner.
It wasn't the first time PayPal's soft-spoken French CEO had been over to Zuckerberg's house for a meal, and Marcus figured it was just another social-business kind of thing. But he hardly had time to dig into his salmon before Zuckerberg began the hard sell. Facebook's single-minded founder explained how important the social network would be in the years to come, and what a big part messaging would play in its evolution. Then Zuckerberg made his pitch: Come run Facebook Messenger.
He was offering Marcus an enormous job. It's no exaggeration to say that Facebook's future depends on the success of its mobile messaging application. Messaging is a modern version of the social graph---the web of social relationships that Zuckerberg first set out to map with Facebook. And in a recent public Q&A with users, Zuckerberg explained that it's "one of the few things that people actually do more than social networking." The company that controls the messaging platform will control the future of the way we interact with people and, quite possibly, with businesses.
The trouble is that, in this all-important race, Facebook was behind from the start. Because Zuckerberg was slow to figure out a mobile strategy, the company's messaging feature was eclipsed by fast and simple messaging apps like Snapchat, Viber, and WhatsApp, which have exploded in popularity, letting people text, talk, and share pictures and videos via their phonebooks.
>The company that controls messaging will control the future of the way we interact with people.
To better compete, Apple and Google remade the texting tools inside their mobile operating systems, transforming them into something more like the Snapchats and the Vibers. But Facebook doesn't have a mobile operating system, so it had to depend on users to download the Facebook app. Even then, messaging was a feature buried within the cluttered app, its icon the size of a pinhead. Sure, Facebook had launched a separate Messenger app, but few people had bothered to download it.
Then last February, Facebook agreed to pay $19 billion for the fastest growing of these apps, WhatsApp, and many people understood it to be Facebook's tacit recognition that Messenger didn't work. The services, however, had very little overlap. Most of WhatsApp's 450 million users were overseas, and they took to the app to get around paying for pricey telephony plans. Meanwhile, in the lucrative North American market, Facebook was still behind.
That's why Zuckerberg invited Marcus to dinner. A serial entrepreneur, Marcus, 41, sold a payments business to eBay's PayPal in 2011, and soon after, he became CEO. This appealed to Zuckerberg. With Instagram's success, Facebook had proven it could buy a startup and nurture it, allowing it to flourish while benefiting from the Facebook-size resources---legal and infrastructure and spam prevention---that go with building a billion-person service. Similarly, Zuckerberg planned for Messenger to operate as its own startup within Facebook, and its leader to have complete control over the product. With his killer combination of entrepreneurial skills, larger company know-how, and a payments background to boot, Marcus was just the guy to run it.
Marcus agreed to think about it. His head was spinning as he walked to his car. But Zuckerberg followed up the next day with a lengthy email detailing his vision for the business, and the two met several more times. Then, in a move that sent a ripple through Silicon Valley, Marcus left his prestigious CEO position supervising 15,000 people to run a tiny division of Facebook, overseeing less than 100.
He could see something Silicon Valley's analysts could not: the numbers. After a series of changes to Facebook Messenger in recent months---and the company's decision to push people onto the service by shutting off messaging in the primary Facebook app---the numbers were ballooning. On Tuesday, speaking at the Techonomy conference outside of San Francisco in his first public appearance as a Facebooker, Marcus will announce that Messenger now has 500 million monthly active users, a jump of 150 percent in a year. Says Marcus: "We're going for a billion." Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg But that's only part of his mission. The ultimate aim is to turn Messenger into something far more than a messenger.
Marcus and I are seated at the Alki bakery in the baggage claim of the Seattle airport, along with his colleague Peter Martinazzi, a lanky engineer with cartoonishly expressive eyebrows. The two of them are in town for the day to meet with the Voice-over-IP (VOIP) team responsible for building Messenger into a service you can also use to make telephone calls. It's just one way that the company hopes to use the messaging app as a platform for much bigger things, including, it seems, online payments.
The VOIP project is familiar ground for Marcus. With more than two decades of experience crafting mobile platforms, he has been working on these kinds of telephony problems since before we could ever imagine we'd one day send messages and symbols on the go from our palms. At 23, he started a Swiss telecom operator called GTN Telecom, which provided local and long distance calling as well as internet access. He sold it to the large global competitor World Access four years later.
>VOIP is just one way that the company hopes to use the messaging app as a platform for much bigger things, including online payments.
But it's also telling that Marcus knows payments, something that Zuckerberg has indicated will be a part of the messaging future. In 2000, Marcus started Echovox, which set out to help large European companies connect with and make money off of mobile audiences. His next company, Zong, began its life as a spinout of Echovox, offering a mobile payments platform that let you pay for items online via direct billing to your mobile phone. At its height, it had access to 3.2 billion mobile users, and it gained attention here in the US because it worked closely with Facebook to sell the social network's virtual currency over the phone.
When PayPal bought Zong in 2011, he became a vice president. Then, in early 2012, when Scott Thompson left PayPal to run Yahoo, Marcus rather abruptly became the PayPal CEO. Many saw his arrival as a clear sign that PayPal was taking a more entrepreneurial approach. Under his leadership, PayPal launched its offline mobile card reader, PayPal Here. But Marcus found the challenges of managing a massive workforce less satisfying. As he says: "It wasn't a creative thing to do. You were fixing things rather than building things." On June 9, Marcus announced his new role via a Facebook post. Then he got in his car and drove the 17 miles from PayPal's San Jose headquarters to Menlo Park for an all-hands meeting with the Messenger team. If the mood was celebratory, it didn't last long. A few senior members of the team brought him into a conference room where Martinazzi jumped into action. Recalling the day, Marcus looks over at Martinazzi, who raises his eyebrows and smiles. "Peter was like: 'I'll send you 15 presentations and invite you to ten groups and can you come back tomorrow?'" Marcus remembers.
In the Seattle airport, Martinazzi glances down and shrugs. "We had a lot to do," he says.
That's been the story of Messenger from the beginning. You can trace its evolution to 2011, when Facebook paid a reported $40 million for the group messaging app Beluga, recruiting its founding trio of former Google engineers.
By the end of the year, it had launched standalone messenger apps for Apple iOS and Android devices. But the messaging tool had remained locked in the netherworld of lost communications---an email product that wasn't as good as email and an instant messaging product that wasn't embraced widely enough to be useful. Eighteen months after the app was released, only 10-20 percent of Facebook's active users (which today number 1.3 billion) had downloaded it.
>The growth team is the equivalent of Facebook's Navy SEALs.
That's when Facebook head of growth Javier Olivan stepped in. The growth team is the equivalent of Facebook’s Navy SEALs, a special operations force brought in when the potential for a feature to take off is great and the stakes are high. Most of Facebook's development teams interact with Olivan's team at some point. As competing messaging applications gained steam, Olivan dug into the data to see how Facebook users behaved.
"We saw that when you compare the way users message when they use the app vs. inside the Facebook app, the engagement was different," Olivan explains. "The patterns of use were different." In other words, when people had the app, they messaged more. Olivan had to figure out how to spur more people to download the app.
Eventually, he pulled out the Messenger engineering teams---at that time, about ten folks on Apple's iOS and ten folks on Android---and moved them to his building under the leadership of Martinazzi. The app was now considered a separate product. The team rebuilt it entirely, using native components and tricking it out with many of its competitors' most popular features.
They built in the ability for users to sync with their phone books and message anyone, even people who weren't on Facebook. And they added a group chat feature as well as a "Like" button that can expand when you hold it down (in case you really like something).
The smiley face on the bottom of the screen produces pages of stickers, all sourced from artists by a former game designer. Next to that icon, a microphone lets users record and send sounds. A small phone icon in the top right corner lets users make calls over the internet directly from the app.
>Despite these improvements, many of Facebook's diehard users didn't download the app.
The Messenger team did a lot of work on the backend as well. Originally, Messenger used the same code that supported the messaging feature within Facebook's main app. "We've revamped the way the servers and the clients talk to each other entirely," says Martinazzi. "Now the app uses less data and the messages get there faster, which is really important when it's my phone and I'm on a limited data plan." But despite these improvements, many of Facebook's diehard users didn't download the app. They already had the communications tools they needed on their phones, and if they really needed to check a Facebook message, they could still access it through Facebook proper. So last April, Facebook experimented with cutting off the ability for users to message in its core app.
It first experimented with this in several countries in Europe where Facebook's messaging was very popular. The experiment worked, and Facebook saw an uptick in engagement. So the company decided to cut off messaging in its main app for everyone. Then came the backlash.
David Marcus didn't have a hand in the decision to kill messaging in Facebook proper, but he thinks the move was critical. "Adults don't download apps anymore," he says, the faintest hint of a Swiss French accent slipping into his sentences. "So if we didn't do this, there's no way people would give it a try." But the move did piss a lot of people off. The week before Marcus officially joined Facebook, the company started switching Messenger off for sections of its North American audience, and people grumbled. A lot. Privacy activists complained that the new app required users to configure their settings all over again, and that it defaulted to settings that exposed too much personal information.
>'If we didn't do this, there's no way people would give it a try.'
Users began spreading misinformation, such as rumors that Facebook had permission to keep your camera turned on at all times, spying on you. An outdated Huffington Post article began recirculating, advancing the idea that the Android version of Messenger requested outrageous permissions.
Call it the burden of being Facebook. Because the social network took an early and aggressive position on privacy (betting that it would matter less to people over time), it picked up a reputation for compromising the privacy of its users. As the social network has matured, the company has grown more aggressive about offering up the controls its users request. But a reputation, as anyone knows, can be hard to shake.
In truth, the permission requests that the Huffington Post piece called out were no more onerous, and no less common, than those similar messaging apps require, and not all that different from those of the main Facebook app. It's the nature of the relationship consumers strike with app makers when these apps are released: in order for the apps to do what we want them to do, we have to give up some control over our own experience by granting them permissions.
But if Facebook often confronts similar issues in product rollouts (Instagram had a backlash over revised privacy terms early in its life), it has also learned quite a bit about what happens next. To wit: people use the product. In less than six months, Facebook has more than doubled the number of active users on Messenger. What's more, as the users discover what Messenger can do, they are using it more frequently.
This is Facebook's lifeblood. Gartner analyst Brian Blau says that for Facebook to have power, people need to use it, spending time on it and inputting their data. "The only way to do that is to have [users] constantly on the network, connecting into Facebook," he explains.
To be sure, some users are still annoyed. A quick survey of my social network turned up complaints of the Facebook-is-overlord variety, general privacy concerns reflecting misperceptions about how privacy settings work, and the feedback that Facebook was trying to win over users for a product users didn't need.
As my sister-in-law wrote in a Facebook post: "I absolutely hate it. One centralized app where you can do everything you need to do is so much more efficient from a user perspective and a phone/memory one too."
>'We want to give people tools that enable them to express themselves better.'
If Marcus is successful, however, everyone will feel they need it. "It's really hard to express emotion across a texting interface, and so we want to give people tools that enable them to express themselves better," he tells me. Just as importantly, he wants the application to work---fast---for every type of message every single time.
Marcus then whips out his phone to show me a feature in development, which will improve the dependability of the product. He sends a message to Martinazzi, and a small blue dot pops up, indicating that Martinazzi has received this message. Beside him, Martinazzi responds to the ding. Once he has read the message, Marcus' blue dots are replaced by a minuscule Martinazzi chat head.
The plus? You can tell when someone has gotten your message, and whether she has read it. "We're going to be doing a lot more things like this," Marcus says. His bet is that if Messenger becomes the most predictable way to communicate, even my sister-in-law will let bygones be bygones and download the app---eventually.
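Under the hood, a read receipt like the one Marcus demos behaves like a tiny state machine: a message only ever moves forward, from sent to delivered to read, and the interface swaps the blue dot for a chat head at the final step. Here is a minimal sketch of that flow, my own simplification for illustration rather than Facebook's actual code:

# The states a message moves through, in order; receipts only move it forward.
STATES = ["sent", "delivered", "read"]

class Message:
    def __init__(self, text):
        self.text = text
        self.state = "sent"  # handed to the server, shown as a blue dot

    def advance(self, new_state):
        # Ignore receipts that arrive out of order or are replayed.
        if STATES.index(new_state) > STATES.index(self.state):
            self.state = new_state

msg = Message("Lunch?")
msg.advance("delivered")  # the recipient's phone has it
msg.advance("read")       # the recipient opened it: swap the dot for a chat head
print(msg.state)          # read

The forward-only rule matters in practice: over a flaky mobile network, receipts can arrive out of order or more than once, and the client has to ignore anything that would move a message backward.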
So far, the Messenger team has not been focused on making money, but it's not a stretch to imagine how Messenger could become profitable. In Facebook's second-quarter earnings call, Zuckerberg indicated there would be an overlap between Messenger and payments, but that it was a long way out. Leaked snapshots obtained by a tech blogger suggest Facebook has experimented with a service that lets friends make payments to each other.
That's not the only way to make money off the service. Already, Facebook has done some advertising-related experiments. When Despicable Me 2 came out in theaters last year, Facebook worked up a partnership that let users download Minion stickers. It's easy to imagine a future strategy for making money off stickers.
>Marcus wants to reinvent messaging between people and businesses, so that it's useful to both parties.
Marcus has even grander ambitions. In Facebook's earliest days, Zuckerberg introduced the idea that banner ads would be replaced by ads so compelling to users that they would function as content. When I first spoke to him about this in 2005, it seemed insane. But last year, Facebook brought in $7.9 billion in sales, capturing nearly six percent of the global display advertising market with its social ads.
In similar fashion, Marcus wants to reinvent messaging between people and businesses, so that it's useful to both parties. "It's really broken," he tells me, as we are wrapping up our airport coffee.
I agree with him. Who doesn't hate spam mail? But I can't imagine how it could be better. "What do you mean?" I ask.
"Well, what airline are you flying today?" he says.
"United," I reply.
"Have you ever looked forward to calling United?" he asks with a slight smile.
The question hangs between us as we reach for our bags. What would United be willing to pay Facebook to communicate better with me? For that matter, what would it take for me to commit to using Facebook to message? And can Marcus convince all of us to double down on Messenger along with him? A significant part of Facebook's future may be riding on it.
" |
788 | 2,014 | "An App That Uses AI to Pick Outfits for You | WIRED" | "https://www.wired.com/2014/12/styleit-app-machine-learning" | "
Davey Alba
Henry Kang traces the roots of his company to a rather typical moment he shared with his wife, Shawna. She was getting dressed one morning, and she asked him that all-too-familiar question: "What can I wear with this?" Kang immediately remembered a silly scene from the movie Clueless , where the film's protagonist, Cher, faces the same question. Rather than ask someone else, she turns to her computer, using it to mix and match the different pieces in her wardrobe. "I thought: 'Totally. A computer can help someone put together an outfit,'" says Kang, who carries a PhD in robotics and computer science from Carnegie Mellon University.
But this wasn't the '90s. He didn't turn to a desktop machine like the one in Cher's bedroom. He turned to the iPhone, creating a mobile app that could do the job. It's called StyleIt.
You upload a photo of a shirt or jacket or a pair of pants to the app, and using computer vision and machine learning, it tells you what to wear with it.
Cher would love it---particularly because, as of this month, it also lets you instantly purchase items that StyleIt recommends, taking advantage of Apple's new mobile payment system, Apple Pay.
The app is part of a much larger movement towards mobile shopping. According to one study , 70 percent of consumers have bought something using their smartphone in the last six months, up from 59 percent in 2013. And like many other tools, StyleIt aims to make this mobile shopping more, well, personal.
Over time, the app "learns" your preferences, much like TheTake , a recently-launched app that pinpoints products in movie scenes and lets you instantly buy them.
According to Kang, StyleIt has already curated more than 1.5 million outfits and has indexed more than 1 million items from 450 stores, including Forever21, J Crew, Tory Burch, H&M, and Urban Outfitters. It refreshes product information from this database of stores every 20 minutes.
To match outfits, Kang says, StyleIt then pulls information from fashion bloggers and sites like Polyvore. As he describes it, the app can recognize colors, textures, and patterns, and based on what the user has liked in the past, it uses predictive modeling to personalize its suggestions.
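Kang doesn't spell out the models, but the simplest version of the idea is easy to sketch: reduce each garment photo to a feature such as a coarse color histogram, then rank the indexed catalog by how close each item sits to the upload, nudged by the user's past likes. The feature choice and scoring below are my illustration of that recipe, not StyleIt's actual system:

from collections import Counter
import math

def color_histogram(pixels, buckets=8):
    # Quantize each (r, g, b) pixel into a coarse bucket and count occurrences.
    hist = Counter()
    for r, g, b in pixels:
        hist[(r * buckets // 256, g * buckets // 256, b * buckets // 256)] += 1
    total = sum(hist.values())
    return {bucket: count / total for bucket, count in hist.items()}

def similarity(h1, h2):
    # Cosine similarity between two sparse histograms.
    dot = sum(h1[k] * h2.get(k, 0.0) for k in h1)
    norm1 = math.sqrt(sum(v * v for v in h1.values()))
    norm2 = math.sqrt(sum(v * v for v in h2.values()))
    return dot / (norm1 * norm2)

def recommend(uploaded_hist, catalog, liked_ids, top=5):
    # Rank catalog items by visual similarity, with a small bonus for items
    # resembling what the user liked before (the "predictive" part, crudely).
    scored = []
    for item in catalog:
        score = similarity(uploaded_hist, item["hist"])
        if item["id"] in liked_ids:
            score += 0.1
        scored.append((score, item["id"]))
    return sorted(scored, reverse=True)[:top]

A production system would swap the color histogram for richer learned features, such as texture and pattern descriptors or embeddings from a trained network, but the rank-by-similarity skeleton stays the same.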
Kang and his team of engineers---over half of whom are Carnegie Mellon-trained computer scientists---are confident their machine learning system can reliably match clothes to taste. After all, Kang says, it works for his wife. "She loves it. We need to take another look at our credit card bill."
" |
789 | 2,014 | "New Startup Sets Out to Bring Google-Style AI to the Masses | WIRED" | "https://www.wired.com/2014/12/new-startup-sets-bring-google-style-ai-masses" | "
Cade Metz
Richard Socher carries a resume that would seem to make him rather attractive to the giants of the internet.
He just finished a PhD at Stanford University, where he explored a form of artificial intelligence called "deep learning," teaching machines to recognize images and understand natural language using software that operates a bit like the networks of neurons in the human brain. In recent years, the giants of the net---including Google, Facebook, Microsoft, and Baidu---have seized on deep learning as a path to the future of automated computer systems, and they've been hiring researcher after researcher from the relatively small community of academics that specialize in this rather complicated technology.
Richard Socher. MetaMind
Socher says the big names have knocked at his door---"I had some very, very attractive offers"---but he turned them down. He wanted to start his own company, a company that would build deep learning technologies anyone can use, not just the internet giants. That company is called MetaMind, and it's backed by $8 million in funding from Salesforce.com CEO Marc Benioff and big-name venture capital fund Khosla Ventures, with Khosla's resident chief technology officer, Sven Strohband, serving as the new company's CEO.
"They're doing some amazing work---Google and Microsoft and Facebook and so on---and their work is impacting a lot of people," Socher says. "But I felt like there's a lot more potential if you give those tools to the remaining Fortune 500 companies---or to people on the internet, just to let them play with them on their own." MetaMind is just four months old, but its website , launched today, provides a taste of the technology the company will offer to businesses large and small---not to mention anyone else on the net. You can see how its deep learning tools can, say, recognize particular images or understand the meaning of particular sentences. If you drag and drop a few chocolate-chip-cookie photos onto one MetaMind tool, it can then automatically identify other images of chocolate-chip cookies. If you type "bald man on a horse," it can show you images of bald men on horses.
These are neat party tricks. But if used on a larger scale, this kind of thing can be remarkably effective inside online businesses---i.e. practically any business. It's certainly useful to Google and Facebook---they're using similar deep learning technology to better understand search queries and identify images on their own online services---and Socher says MetaMind is already working with a wide range of businesses, including everything from companies with an interest in identifying food photos to medical outfits looking to automatically examine things like body scans and X-rays.
The startup is just one of many created to bring this type of advanced artificial intelligence to the larger online universe.
Though several have been bought up by the likes of Google, Facebook, Yahoo, and Twitter, others remain independent, most notably a startup called Clarifai.
But whereas Clarifai focuses on image search, MetaMind aims to offer a rather broad set of tools, including natural language processing. At Stanford, Socher specialized in this field, striving to build systems that can understand not just words but sentences or even entire paragraphs.
The jury is still out on MetaMind's particular tools. But the company has at least pinpointed the important area of research. Though many companies have deployed deep learning tools that recognize images and speech, the "next frontier" is a breed of computer system that can truly understand language, says Yann LeCun, the deep learning founding father who now runs Facebook's AI lab.
Sven Strohband. MetaMind
Yes, he explains, tools like Siri and Google Now can understand words you say, but not necessarily the meaning of those words. The hope is that deep learning will help drive machines that can learn to understand language as they go along. One of the technology's key attributes is its ability to train itself on certain tasks, and this, LeCun says, is where many believe it can help with natural language processing.
This is the kind of thing being explored by another MetaMind tool, where you can type two sentences and it will tell you how similar they are. If you key in "surfers ride big waves" and "big waves are ridden by surfers," it will tell you they mean the same thing. It's the sort of technology businesses can use to, say, automatically answer questions from their customers. "A customer can ask a question a myriad of ways---even though they all mean the same things," says Socher. Or it could help analyze what customers are saying about a company on social networking services like Twitter.
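Socher's academic work used recursive neural networks over parse trees; a much cruder stand-in for the same idea is to average word vectors and compare the results. Even this toy version, sketched below, scores the two surfing sentences as highly similar, because their content words overlap almost entirely:

import math

def sentence_vector(sentence, word_vectors):
    # Average the vectors of the words we recognize; ignore the rest.
    vectors = [word_vectors[w] for w in sentence.lower().split() if w in word_vectors]
    return [sum(dims) / len(vectors) for dims in zip(*vectors)]

def sentence_similarity(s1, s2, word_vectors):
    a = sentence_vector(s1, word_vectors)
    b = sentence_vector(s2, word_vectors)
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# With any reasonable set of word vectors, "surfers ride big waves" and
# "big waves are ridden by surfers" land close to 1.0 on this scale.

The catch is that averaging throws away word order---"dog bites man" and "man bites dog" come out the same---which is precisely the gap the deeper models MetaMind is building try to close.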
MetaMind---which currently spans only 10 employees---will act as a kind of deep learning consultant, but it will also offer its own deep learning services and software to businesses. Running across hundreds of machines loaded with tens of thousands of graphics processors , its online service will let businesses run deep learning tasks without setting up their own hardware. But if a business prefers to run its own deep learning systems, MetaMind will provide the software---and the expertise---needed to do so.
The company's pitch is rather broad. It's positioning itself as a catch-all deep learning company, and in all likelihood, it will hone its efforts in the coming months. But for Adam Gibson, the brains behind another deep learning startup called SkyMind , MetaMind is an outfit worth following, mainly because of Socher's previous work. "They will occupy a niche," he says, "if only because Richard knows what he's doing."
" |
790 | 2,012 | "If Xerox PARC Invented the PC, Google Invented the Internet | WIRED" | "https://www.wired.com/wiredenterprise/2012/08/google-as-xerox-parc" | "
Cade Metz
The truth about Jeff Dean appeared on April Fool's Day 2007.
Somewhere inside Google, a private website served up a list of facts about Dean, one of Google's earliest employees and one of the main reasons the web giant handles more traffic than any other operation on the net. The site was only available to Googlers, but all were encouraged to add their own Jeff Dean facts. And many did.
"Jeff Dean once failed a Turing test when he correctly identified the 203rd Fibonacci number in less than a second," read one.
"Jeff Dean compiles and runs his code before submitting," read another, "but only to check for compiler and CPU bugs." "The speed of light in a vacuum used to be about 35 mph," said a third. "Then Jeff Dean spent a weekend optimizing physics." No, these facts weren't really facts. But they rang true. April Fool's Day is a sacred occasion at Google, and like any good April Fool's joke, the gag was grounded in reality. A Google engineer named Kenton Varda set up the website, playing off the satirical Chuck Norris facts that so often bounce around the net, and when he mailed the link to the rest of the company, he was careful to hide his identity. But he soon received a note from Jeff Dean, who had tracked him down after uncovering the digital footprints hidden in Google's server logs.
Inside Google, Jeff Dean is regarded with awe. Outside the company, few even know his name. But they should. Dean is part of a small group of Google engineers who designed the fundamental software and hardware that underpinned the company's rise to the web's most dominant force, and these creations are now mimicked by the rest of the net's biggest names – not to mention countless others looking to bring the Google way to businesses beyond the web.
>"Google did a great job of slurping up some of the most talented researchers in the world at a time when places like Bell Labs and Xerox PARC were dying. It managed to grab not just their researchers, but their lifeblood." \- Mike Miller Time and again, we hear the story of Xerox PARC , the Silicon Valley research lab that developed just about every major technology behind the PC revolution, from the graphical user interface and the laser printer to Ethernet networking and object-oriented programming. But because Google is so concerned with keeping its latest data center work hidden from competitors – and because engineers like Jeff Dean aren't exactly self-promoters – the general public is largely unaware of Google's impact on the very foundations of modern computing. Google is the Xerox PARC of the cloud computing age.
"Google did a great job of slurping up some of the most talented researchers in the world at a time when places like Bell Labs and Xerox PARC were dying," says Mike Miller, an affiliate professor of particle physics at the University of Washington and the chief scientist of Cloudant , one of the many companies working to expand on the technologies pioneered by Google. "It managed to grab not just their researchers, but also their lifeblood." These Google technologies aren't things you can hold in your hand – or even fit on your desk. They don't run on a phone or a PC. They run across a worldwide network of data centers.
They include sweeping software platforms with names like the Google File System , MapReduce , and BigTable , creations that power massive online applications by splitting the work into tiny pieces and spreading them across thousands of machines, much like micro-tasks are parceled out across a massive ant colony. But they also include new-age computer servers, networking hardware, and data centers that Google designed to work in tandem with this software. The idea is to build warehouse-sized computing facilities that can think like a single machine. Just as an ant colony acts as one entity, so does a Google data center.
While Silicon Valley stood transfixed by social networks and touch screens, Google remade the stuff behind the scenes, and soon, as the other giants of the web ran into their own avalanche of online data, they followed Google's lead. After reinventing Google's search engine, GFS and MapReduce inspired Hadoop , a massive number-crunching platform that's now one of the world's most successful open source projects. BigTable helped launch the NoSQL movement , spawning an army of web-sized databases. And in so many ways, Google's new approach to data center hardware sparked similar efforts from Facebook, Amazon, Microsoft, and others.
To be sure, Google's ascendance builds on decades of contributions from dozens of equally unheralded computer scientists from many companies and research institutions, including PARC and Bell Labs. And like Google, Amazon was also a major influence on the foundations of the net – most notably through a research paper it published on a file system called Dynamo. But Google's influence is far broader.
The difference between it and a Xerox PARC is that Google profited mightily from its creations before the rest of the world caught on. Tools like GFS and MapReduce put the company ahead of the competition, and now, it has largely discarded these tools, moving to a new breed of software and hardware. Once again, the rest of the world is struggling to catch up.
Google's Twin Deities
Kenton Varda could have targeted several other Google engineers with his April Fool's Day prank. Jeff Dean just seemed like "the most amusing choice," Varda remembers. "His demeanor was perhaps the furthest from what you'd expect in a deity."
Even for Varda, who works on the team that oversees Google's infrastructure, the two engineers are difficult to separate. "Jeff and Sanjay worked together to develop much of Google's infrastructure and have always seemed basically joined at the hip," says Varda. "It's often hard to distinguish which of them really did what.
>"All code changes at Google require peer review prior to submission, but in Jeff and Sanjay's case, often one will send a large code review to the other, and the other will immediately 'LGTM' it, because they wrote the change together in the first place." \- Kenton Varda "All code changes at Google require peer review prior to submission, but in Jeff and Sanjay's case, often one will send a large code review to the other, and the other will immediately 'LGTM' it, because they wrote the change together in the first place." LGTM is Google-speak for "looks good to me." Varda means this quite literally. Over the years, Dean and Ghemawat made a habit of coding together while sitting at the same machine. Typically, Ghemawat does the typing. "He's pickier about his spacing," Dean says.
The two met before coming to Google. In the '90s, both worked at Silicon Valley research labs run by the Digital Equipment Corporation, a computing giant of the pre-internet age. Dean was at DEC's Western Research Lab in Palo Alto, California, and Ghemawat worked two blocks away, at a sister lab called the Systems Research Center. They would often collaborate on projects, not only because Dean had a thing for the gelato shop that sat between the two labs, but because they worked well together. At DEC, they helped build a new compiler for the Java programming language and a system profiler that remade the way we track the behavior of computer servers.
They came to Google as part of a mass migration from DEC's research arm. In the late-'90s, as Google was just getting off the ground, DEC was on its last legs. It made big, beefy computer servers using microprocessors based on the RISC architecture, and the world was rapidly moving to low-cost machines equipped with Intel's x86 chips. In 1998, DEC was acquired by computer giant Compaq. Four years later, Compaq merged with HP. And the top engineers from DEC's vaunted research operation gradually moved elsewhere.
"DEC labs were going through a bit of rocky period after the Compaq acquisition," Dean says, "and it wasn't exactly clear what role research would have in the merged company." Some engineers went to Microsoft, which was starting a new research operation in Silicon Valley. Some went to a Palo Alto startup called VMware, whose virtual servers were about to turn the data center upside-down.
And many went to Google, founded the same year DEC was acquired by Compaq.
Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg It was a time when several of the tech world's most influential research labs were losing steam, including Xerox PARC and Bell Labs, the place that produced such important technologies as the UNIX operating system and the C programming language.
But although these labs had already seen their best days, many of their researchers would feed a new revolution.
"At the time of the bubble burst in 2001, when everyone was downsizing, including DEC, the main two high-tech companies that were hiring were Google and VMware," says Eric Brewer, the University of California at Berkeley computer science professor who now works alongside Dean and Ghemawat. "Because of the crazy lopsidedness of that supply and demand, both companies hired many truly great people and both have done well in part because of that factor." >"At the time of the bubble burst in 2001, when everyone was downsizing, including DEC, the main two high-tech companies that were hiring were Google and VMware." \- Eric Brewer Like Dean and Ghemawat, several other engineers who arrived at Google from DEC would help design technologies that caused a seismic shift in the web as a whole, including Mike Burrows, Shun-Tak Leung, and Luiz André Barroso.
At the time, these engineers were just looking for interesting work – and Google was just looking for smart people to help run its search engine. But in hindsight, the mass migration from DEC provides the ideal metaphor for the changes Google landed on the rest of the world.
DEC was one of the first companies to build a successful web search engine – AltaVista, which came out of the Western Research Lab – and at least in the beginning, the entire thing ran on a single DEC machine.
But Google eclipsed AltaVista in large part because it turned this model on its head. Rather than using big, beefy machines to run its search engine, it broke its software into pieces and spread them across an army of small, cheap machines. This is the fundamental idea behind GFS, MapReduce, and BigTable – and so many other Google technologies that would overturn the status quo.
In hindsight, it was a natural progression. "The architecture challenges that arise when building a data system like Google's that spans thousands of computers isn't all that different from the challenges that arise in building a sophisticated monolithic system," says Armando Fox , a professor of computer science at the University of California, Berkeley who specializes in large-scale computing. "They problems wear very similar clothing, and that's why it was essential to have people with experience at places like DEC." Jeff Dean Follows His Uncle to Google Jeff Dean was the first to arrive from DEC. He came by way of his "academic uncle," Urs Hölzle.
Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg Hölzle was one of Google's first 10 employees, and as the company's first vice president of engineering, he oversaw the creation of the Google infrastructure , which now spans more than 35 data centers across the globe, judging from outside sources. He joined Google from a professorship at the University of California at Santa Barbara, and before that, he studied at Stanford under a prof named David Ungar , developing some of the core technologies used in today's compilers for the Java programming language.
Dean's academic adviser also studied with Ungar, and this made Hölzle his academic uncle. In 1999, with DEC in its death throes, Dean left the company for a startup called MySimon, but when he saw Hölzle turn up at Google, he sent an email looking for a new Google job of his own. He was soon hired by the same man who hired Hölzle: Google co-founder Larry Page.
At first, Dean was charged with building an ad system for Google's fledgling search engine. But after a few months, he moved onto the company's core search technologies, which were already buckling under the weight of a rapidly growing worldwide web. He was soon joined by Ghemawat, who made the move to Google in large part because Dean and other DEC researchers – Krishna Bharat and Monika Henzinger – were already on board.
"It's fairly likely that I might never have interviewed at Google if Jeff hadn't been there," Ghemawat says. They quickly picked up where they left off at DEC. Over the next three or four years, together with an ever changing group of other engineers, the two engineers designed and built multiple revisions of the company's core systems for crawling the web, indexing it, and serving search results to users across the globe.
Yes, they would often code at the same machine – while drinking an awful lot of coffee. Cappuccino is their drug of choice. Their partnership works, Dean says, because Ghemawat is more level-headed. "I tend to be very impatient, thinking about all the ways we can do something, my mind and hands spinning at a very fast rate. Sanjay gets excited, but in a more subdued way. He corrects my course, so that we end up moving in the right direction." Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg But Ghemawat says Dean's approach is just as important. He keeps them moving forward. "I often get down, thinking about all the different ways of doing something, worrying about the right way," Ghemawat says. "It's good to have someone with the energy and excitement needed to get to the end goal." The big breakthroughs came with the creation of the Google File System and MapReduce, which rolled out across Google's data centers in the early part of the last decade. These platforms provided a more reliable means of building the massive index that drives Google's search engine. As Google crawled the world’s webpages, grabbing info about each, it could spread this data over tens of thousands of servers using GFS, and then, using MapReduce, it could use the processing power inside all those servers to crunch the data into a single, searchable index.
>"What do you do when your job is to take the entire internet, index it, and make a copy of it – and not do it in a way that the copy is the same size as the internet? That's a pretty interesting technical challenge." \- Jason Hoffman The trick is that these platforms didn't break when machines failed or the network slowed. When you're dealing with ten of thousands of ordinary servers as Google was, machines fail all the time. With GFS and MapReduce, the company could duplicate data on multiple machines. If one broke, another was there to step in.
"The scale of the indexing work made it complicated to deal with machine failures and delays, so we started looking for abstractions that would allow for automatic parallelization across a collection of machines – to give higher performance and scalability – and could also make long-running computations that ran on thousands of machines robust and reliable," Jeff Dean says, in describing the thinking behind MapReduce. Once these tools were in place on the search engine, he explains, Google realized they could help run other web services too.
BigTable arose in similar fashion. Like MapReduce, it ran atop the Google File System, but it didn't process data. It operated as a massive database. "It manages rows of data," Dean says, "and spreads them across more and more machines as you need it." It didn't give you as much control over the data as a traditional relational database, but it could handle vast amounts of information in ways you couldn't with platforms designed for a single machine.
The same story appears again and again. As it grew, Google faced an unprecedented amount of data, and it was forced to build new software.
"What do you do when your job is to take the entire internet, index it, and make a copy of it – and not do it in a way that the copy is the same size as the internet? That's a pretty interesting technical challenge," says Jason Hoffman, the chief technology officer at cloud computing outfit Joyent.
"Very often the hammer swinger knows how to make the hammer. Most things that are innovative come from a forge. They come from those points where you're facing failure." Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg The Data Center Empire Built on Crème Brûlée Luiz André Barroso followed Jeff Dean and Sanjay Ghemawat from DEC to Google. But he almost didn't.
Barroso had worked alongside Dean at DEC's Western Research Lab, and in 2001, he was weighing job offers from Google and VMware. After visiting and interviewing with both companies, he put together a spreadsheet listing the reasons to join each. But the spreadsheet ended in a dead heat: 122 reasons for Google, and 122 for VMware.
Then he talked to Dean, who asked whether the spreadsheet included the crème brûlée served by executive chief Charlie Ayers the day he visited Google. "Crème brûlée is his absolute favorite," Dean remembers. "I asked if he had factored it into his 122-point list, and he said: 'No! I forgot!'" Barroso accepted Google's job offer the next morning.
Barroso was unusual in that he wasn't necessarily a software engineer. At DEC, he helped pioneer multicore processors – processors that are actually many processors in one. But after Barroso briefly worked on Google software, Hölzle put him in charge of an effort to overhaul Google's hardware infrastructure, including not only its servers and other computing gear, but the data centers housing all that hardware. "I was the closest thing we had to a hardware person," Barroso remembers.
>"Crème brûlée is his absolute favorite. I asked if he had factored it into his 122-point list, and he said: 'No! I forgot!'" \- Jeff Dean Hölzle, Barroso, and their "platforms team" began by rethinking the company's servers. In 2003, rather purchase standard machines from the likes of Dell and HP, the team started cutting costs by designing their own servers and then contracting with manufacturers in Asia to build them – the same manufacturers who were building gear for the Dells and the HPs. In short, Google cut out the middle men.
Uniquely, each Google machine included its own 12-volt battery that could pick up the slack if the system lost its primary source of power. This, according to Google, was significantly more efficient that equipping the data center with the massive UPSes – uninterruptible power supplies – that typically provide backup power inside the world’s computing facilities.
Then the team went to work on the data centers housing these servers. At the time, Google merely leased data center space from other companies. But Barroso and crew started from scratch, designing and building their own data centers in an effort to save money and power, but also to improve the performance of Google's web services.
The company began with a new facility in The Dalles, Oregon, i.e. the rural area where it could tap into some cheap power – and some serious tax breaks. But the main goal was to build an entire data center that behaved like a single machine. Barroso and Hölzle call it "warehouse-scale computing." Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg “Large portions of the hardware and software resources in these facilities must work in concert to efficiently deliver good levels of internet service performance, something that can only be achieved by a holistic approach to their design and deployment,” Barroso and Hölzle in their seminal 2009 book on the subject, The Datacenter as a Computer.
"In other words, we must treat the data center itself as one massive warehouse-scale computer.” They designed the facility using a new kind of building block. They packed servers, networking gear, and other hardware into standard shipping containers – the same kind used to transport goods by boat and train – and these data center "modules" could be pieced together into a much larger facility. The goal was to maximize the efficiency of each module. Apparently, the notion came to Larry Page in 2003, when he saw the Internet Achieve give a presentation on its plans for similar modules – though Barroso doesn't remember where the idea came from. "Other than it wasn’t me," he says.
The company's facility in The Dalles went live in 2005. Over the years, there were rumors of data center modules and custom servers, but the details remained hidden until 2009, when Google held a mini-conference at its Silicon Valley headquarters. In the data center, Google isn't content to merely innovate. It keeps the innovations extremely quiet until it's good and ready to share them with the rest of the world.
The Tesla Effect Larry Page has a thing for Nikola Tesla. According to Steven Levy's behind the scenes look at Google – In The Plex – Page regarded Tesla as an inventor on par with Edison, but always lamented his inability to turn his inventions into profits and long-term recognition.
Clearly, the cautionary tale of Nicola Tesla influenced the way Google handles its core technologies. It treats them as trade secrets, and much like Apple, the company has a knack for keeping them secret. But in some cases, after a technology runs inside Google for several years, the company will open the kimono. "We try to be as open as possible – without giving up our competitive advantage," says Hölzle. "We will communicate the idea, but not the implementation." Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg In 2003 and 2004, the company published papers on GFS and MapReduce.
Google let the papers speak for themselves, and before long, a developer named Doug Cutting used them to build an indexing system for an open source search engine he called Nutch. After Cutting joined Yahoo – Google's primary search rival at the time – the project morphed into Hadoop.
>"We try to be as open as possible – without giving up our competitive advantage. We will communicate the idea, but not the implementation." \- Urs Hölzle A way of crunching epic amounts of data across thousands of servers, Hadoop has long been used by the other giants of the web, including Facebook, Twitter, and Microsoft, and now, it's spreading into other businesses. By 2016, according to research outfit IDC, the project will fuel a $813 million software market.
History repeated itself with BigTable. In 2006, Google published a paper on its sweeping database, and together with an Amazon paper describing a data store called Dynamo, it spawned the NoSQL movement, a widespread effort to build databases that could scale to thousands of machines.
"If you look at every NoSQL solution out there, everyone goes back to the Amazon Dynamo paper or the Google BigTable paper," says Joyent's Jason Hoffman. "What would the world be like if no one at Google or Amazon ever wrote an academic paper?" Google's hardware operation is a slightly different story. We still know relatively little about the inside of Google's data centers, but the company's efforts to design and build its own gear has undoubtedly inspired similar efforts across the web and beyond. Facebook is now designing its own servers , server racks, and storage equipment, with help from manufacturers in Asia. According to outside sources , the likes of Amazon and Microsoft are doing much the same. And with Facebook "open sourcing" its designs under the aegis of the Open Compute Foundation, many others companies are exploring similar hardware.
What's more, modular data centers are now a mainstay on the web. Microsoft uses them, as do eBay and countless others. Mike Manos, Microsoft's former data center guru, denies that Google was the inspiration for the move to modular data center, pointing out that similar modules date back to the 1960s, but it was Google that brought the idea to forefront. As Cloudant's Mike Miller points out, GFS and MapReduce also depend on ideas from the past. But Google has knack for applying these old ideas to very new problems.
Google's Past Is Prologue The irony is that Google has already replaced many of these seminal technologies. Over the past few years, it swapped out GFS for a new platform dubbed " Colossus ," and in building its search index, it uses a new system known as Caffeine , which includes piece of MapReduce and operates in a very different way, updating the index in realtime rather than rebuilding the thing from scratch.
Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg Google may still use data center modules in The Dalles, but it seems they no longer play a role in its newer facilities. We don't know much about what the company's now uses inside these top secret facilities, but you can bet its a step ahead of what it did in the past.
In recent years, Google published papers on Caffeine and two other sweeping software platforms that underpin its services: Pregel, a "graph" database for mapping relationships between pieces of data, and Dremel, a means of analyzing vast amounts data at super high speeds. Multiple open source projects are already working to mimic Pregel. At least one is cloning Dremel. And Cloudant's Miller says Caffeine – aka Percolator – is sparking changes across the Hadoop and NoSQL markets.
These are just the some latest creations in use at Google. No doubt, there are many others we don't know about. But whatever Google is using now, it will soon move on. In May of last year, University of California at Berkeley professor Eric Brewer announced he was joining the team building Google's "next gen" infrastructure. "The cloud is young," he said. "Much to do. Many left to reach." >"The Google infrastructure work wasn't really seen as research. It was about how do we solve the problems we're seeing." \- Sanjay Ghemawat Brewer – one of the giants of distributed computing research – is yet another sign that Google is the modern successor to Xerox PARC. But the company also takes the PARC ethos a step further.
You can trace Google's research operation through DEC, all the way back to PARC's earliest days. The DEC Systems Research Center was founded by Robert Taylor, the same man who launched the computer science laboratory at PARC.
Taylor started the SRC because he felt that by the early '80s, PARC had lost its way. "A lot of people who I worked with at PARC were as disenchanted with PARC as I was," he says. "So they joined me." He worked to build the lab in the image of the old PARC Computer Science Lab – even in terms of its physical setup – and in some ways, he succeeded.
But it suffered from the same limitations as so many corporate research operations. It took ages to get the research into the marketplace. This was also true at the DEC Western Research Lab, where Jeff Dean worked. And this is what brought him to Google. "Ultimately, it was this frustration of being one level removed from real users using my work that led me to want to go to a startup," Dean says.
But Google wasn't the typical startup. The company evolved in a way that allowed it to combine the challenge of research with the satisfaction of instantly putting the results into play. Google was a research operation – and yet it wasn't. "The Google infrastructure work wasn't really seen as research," Ghemawat says. "It was about how do we solve the problems we're seeing in production." For some, the drawback of working on Google's core infrastructure is that you can't tell anyone else what you're doing. This is one of the reasons an engineer named Amir Michael left Google to build servers at Facebook. But, yes, there are times when engineers are let loose to publish their work or even discuss it in public.
For Google, it's a balancing act. Though some are critical of the particular balance, it's certainly working for Google. And there's no denying its methods have pushed the rest of the web forward. PARC never had it so good.
" |
791 | 2,015 | "Google's TensorFlow Alone Will Not Revolutionize AI | WIRED" | "https://www.wired.com/2015/11/tensorflow-alone-will-not-revolutionize-ai" | "Erik T. Mueller
Google this week open sourced TensorFlow, its elegant and powerful artificial intelligence engine. Google uses this machine learning software internally to add capabilities like speech recognition and object detection to its products. Now, it's available for everyone to use. What will this mean for the design of artificial intelligence systems? As wonderful as TensorFlow is, I fear that it may accelerate the design of AI systems that are hard to understand and hard to communicate with. I think it will focus our attention on experimenting with mathematical tricks, rather than on understanding human thought processes.
TensorFlow is aimed at the development of machine learning systems that require heavy numerical computation, like artificial neural networks (ANNs). The trouble with these systems is that they consist of millions of numbers—too many for people to sift through and make sense of.
Erik Mueller is Founder and CEO of Symbolic AI, LLC. Prior to that, Mueller was a Research Staff Member at IBM Research where he developed artificial intelligence systems including the Watson Jeopardy! system, Watson for Healthcare, WatsonPaths, dialogue systems, and needs-based recommendation systems. He won the AAAI Feigenbaum Prize with the IBM Watson Team. He has 8 patents on AI, with 6 more pending, and is the author of three books on artificial intelligence. He has a Ph.D. and M.S. in computer science from UCLA and an S.B. in computer science from MIT.
Suppose we train an ANN to recognize cats. When it recognizes a previously unseen cat in an image, it can’t explain to us why or how it did this. And if the ANN fails to recognize a spotted cat, it’s hard for us to fix the problem. We’re not going to tell it something like, “change element 341375’s value from 0.3265 to 0.4271, element 1954236’s value from 0.9218 to 0.8612, …” That would be a long list, and we don’t even know what numbers to change to get the desired result (this is especially true for end-users, but it's true for the researchers developing these systems, as well). More likely, we’ll either ignore the error, retrain the ANN with better cat data, or modify the training algorithm. These are blunt tools, because they don’t operate in the domain of interest, namely cats. Rather, they operate in the domain of ANNs. It would be better if we could simply tell the system that cats can be spotted.
I’d rather see us design AI systems that are understandable and communicative.
AI systems are going to be increasingly involved in our lives, as we turn to them more and more for important decision-making tasks. When AI systems make bad decisions ( as they've done before , and inevitably will again), we need to be able to understand why they made those decisions and communicate with them to fix the problem.
Below are some qualities an understandable and communicative AI system should possess.
It reasons similarly to how humans reason.
Person: Why isn’t Caitlyn answering my texts? Computer: Because she’s at school, and her phone is in her room.
It can engage in dialogues with us.
Computer: I can’t access the wired network.
Person: Why not? Computer: Because the Ethernet port isn’t connected to the network.
Person: How do you know? Computer: Because there is no voltage on any of the pins.
Person: What should I do? Computer: Try replacing the cable.
It can explain its reasoning and allow us to probe its reasoning.
Computer: Your credit card has been declined.
Person: Why? Computer: Because the transaction amount, $1,175.12, is greater than your available credit, $609.85.
Person: Why is that my available credit? Computer: Your available credit is your total credit limit, $10,000, minus your current balance of $9,038.34, minus your pending charges of $351.81.
It allows us to persuade it to adopt a different position.
Computer: You should not take any more acetaminophen.
Person: Why not? Computer: Because you are on warfarin. You should not take more than 2,000 mg of acetaminophen per day unless instructed by a physician.
Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg Person: Dr. Gold told me to take it.
Computer: OK, but be careful.
It learns similarly to how people learn, through interaction rather than from large amounts of data.
Computer: Is your business a sole proprietorship, a non-profit, a corporation, or a partnership? Person: None of those. It’s an LLC.
Computer: An LLC. Got it.
How do we get an AI system to engage in these sorts of interactions? Earlier this year, Google unveiled a chatbot that could debate the meaning of life, or help a human user troubleshoot internet connectivity problems. The bot is impressive, but its responses are disconnected from the real world. For instance, when the chatbot tells a human something like “seems the mail is not loading,” it’s making this up. It hasn’t actually observed whether the mail has loaded or not. The machine's responses are based solely on training data and the history of the conversation at hand.
Similarly, I don't think that the Google chatbot will be able to reliably implement interactions like the credit limit example, given above, because the search space for the ANN representation of potential financial transactions is too large. I think that implementing these interactions reliably requires discrete, human-readable representations like equations, logical formulas, rules, frames, models, and diagrams. These representations avoid the added complexity of ANNs, so the search space is more tractable.
How do we implement the credit limit example? The AI system needs to be able to query the user’s financial information. A natural language parser would parse the incoming utterances, a rule-based dialogue manager would handle the incoming utterances and carry out the appropriate database queries, and a natural language generator would generate the appropriate responses.
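As a concrete illustration of that pipeline, here is a minimal sketch in Python. It is a toy under stated assumptions, not the author's actual system: the regex "parser," the Account record, and the handle_utterance dialogue manager are all hypothetical names, and a real parser and database layer would be far richer. What it shows is the property argued for above: because the representation is a handful of readable rules over meaningful fields, the system's answers can be traced, probed, and corrected in the domain of interest.

```python
# A minimal sketch of the parse -> rule-based dialogue manager -> generate
# pipeline described above, applied to the credit limit example. All names
# (Account, parse, handle_utterance) are hypothetical illustrations.
import re
from dataclasses import dataclass

@dataclass
class Account:
    credit_limit: float
    balance: float
    pending: float

    @property
    def available_credit(self) -> float:
        return self.credit_limit - self.balance - self.pending

def parse(utterance: str) -> str:
    """Map an incoming utterance to a discrete dialogue act."""
    if re.search(r"\bwhy\b.*available credit", utterance, re.I):
        return "ASK_AVAILABLE_CREDIT_EXPLANATION"
    if re.search(r"\bwhy\b", utterance, re.I):
        return "ASK_DECLINE_EXPLANATION"
    return "UNKNOWN"

def handle_utterance(utterance: str, account: Account, amount: float) -> str:
    """Rule-based dialogue manager: query the account, generate a reply."""
    act = parse(utterance)
    if act == "ASK_DECLINE_EXPLANATION":
        return (f"Because the transaction amount, ${amount:,.2f}, is greater "
                f"than your available credit, ${account.available_credit:,.2f}.")
    if act == "ASK_AVAILABLE_CREDIT_EXPLANATION":
        return (f"Your available credit is your total credit limit, "
                f"${account.credit_limit:,.2f}, minus your current balance of "
                f"${account.balance:,.2f}, minus your pending charges of "
                f"${account.pending:,.2f}.")
    return "I'm sorry, I didn't understand that."

acct = Account(credit_limit=10_000.00, balance=9_038.34, pending=351.81)
print(handle_utterance("Why?", acct, amount=1_175.12))
print(handle_utterance("Why is that my available credit?", acct, amount=1_175.12))
```

Run as written, the sketch reproduces the two explanations from the dialogue above, and fixing a wrong answer means editing a legible rule rather than millions of opaque weights.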
How do we acquire human-readable representations? Acquisition of these representations needs to be a multifaceted process. An AI system can acquire human-readable representations as it interacts with people. We can also use machine learning to suggest human-readable representations, although these representations are often questionable to humans.
For example, here are some of the parts of a building as learned by one machine learning system: rubble, floor, facade, basement, roof, atrium, exterior, tenant, rooftop, and wreckage.
Parts like rubble and wreckage seem like strange additions to this list, because buildings are not in ruins most of the time. Here are some paraphrases of X asks Y, as learned by another machine learning system: X tells Y, X meets with Y, X informs Y, X contacts Y, and X writes to Y.
These are certainly related to X asks Y, but they are not synonymous in all contexts. And here is an event sequence for cooking as learned by yet another machine learning system: A boil B, A slice B, A peel B, A saute B, A cook B, A chop B.
To a human cook, many of these tasks appear to be out of order (one typically peels before slicing, and chops before cooking). These are top-ranked results generated by state-of-the-art machine learning systems. Lower ranked results are even worse.
Representations suggested by machine learning need to be vetted by humans, and not just because they contain errors. We need to examine machine-learned representations with a critical eye because, as humans, it’s up to us to decide what we want our world to look like.
We don’t have to accept incomprehensible and uncommunicative AI systems. We can build understandable and communicative systems that (1) learn human-understandable representations through interaction with users as well as manual curation of knowledge and (2) maintain human-understandable representations of the states of users and the world. It’s hard work, but it can be done.
" |
792 | 2,015 | "TensorFlow, Google's Open Source AI, Signals Big Changes in Hardware Too | WIRED" | "https://www.wired.com/2015/11/googles-open-source-ai-tensorflow-signals-fast-changing-hardware-world" | "Cade Metz
In open sourcing its artificial intelligence engine—freely sharing one of its most important creations with the rest of the Internet—Google showed how the world of computer software is changing.
These days, the big Internet giants frequently share the software sitting at the heart of their online operations. Open source accelerates the progress of technology. In open sourcing its TensorFlow AI engine, Google can feed all sorts of machine-learning research outside the company, and in many ways, this research will feed back into Google.
But Google's AI engine also reflects how the world of computer hardware is changing. Inside Google, when tackling tasks like image recognition and speech recognition and language translation , TensorFlow depends on machines equipped with GPUs , or graphics processing units, chips that were originally designed to render graphics for games and the like, but have also proven adept at other tasks. And it depends on these chips more than the larger tech universe realizes.
According to Google engineer Jeff Dean, who helps oversee the company's AI work , Google uses GPUs not only in training its artificial intelligence services, but also in running these services—in delivering them to the smartphones held in the hands of consumers.
AI is playing an increasingly important role in the world's online services—and alternative chips are playing an increasingly important role in that AI.
That represents a significant shift. Today, inside its massive computer data centers, Facebook uses GPUs to train its face recognition services, but when delivering these services to Facebookers—actually identifying faces on its social networks—it uses traditional computer processors, or CPUs. And this basic setup is the industry norm, as Facebook CTO Mike "Schrep" Schroepfer recently pointed out during a briefing with reporters at the company's Menlo Park, California headquarters. But as Google seeks an ever greater level of efficiency, there are cases where the company both trains and executes its AI models on GPUs inside the data center. And it's not the only one moving in this direction. Chinese search giant Baidu is building a new AI system that works in much the same way. "This is quite a big paradigm change," says Baidu chief scientist Andrew Ng.
The change is good news for nVidia, the chip giant that specializes in GPUs. And it points to a gaping hole in the products offered by Intel, the world's largest chip maker. Intel doesn't build GPUs.
Some Internet companies and researchers , however, are now exploring FPGAs, or field-programmable gate arrays, as a replacement for GPUs in the AI arena, and Intel recently acquired a company that specializes in these programmable chips.
The bottom line is that AI is playing an increasingly important role in the world's online services—and alternative chip architectures are playing an increasingly important role in AI. Today, this is true inside the computer data centers that drive our online services, and in the years to come, the same phenomenon may trickle down to the mobile devices where we actually use these services.
At places like Google, Facebook, Microsoft, and Baidu, GPUs have proven remarkably important to so-called "deep learning" because they can process lots of little bits of data in parallel.
Deep learning relies on neural networks—systems that approximate the web of neurons in the human brain—and these networks are designed to analyze massive amounts of data at speed. In order to teach these networks how to recognize a cat, for instance, you feed them countless photos of cats. GPUs are good at this kind of thing. Plus, they don't consume as much power as CPUs.
But, typically, when these companies put deep learning into action---when they offer a smartphone app that recognizes cats, say---this app is driven by a data center system that runs on CPUs. According to Bryan Catanzaro, who oversees high-performance computing systems in the AI group at Baidu, that's because GPUs are only efficient if you're constantly feeding them data, and the data center server software that typically drives smartphone apps doesn't feed data to chips in this way. Typically, as requests arrive from smartphone apps, servers deal with them one at a time. As Catanzaro explains, if you use GPUs to separately process each request as it comes into the data center, "it's hard to get enough work into the GPU to keep it running efficiently. The GPU never really gets going."
That said, if you can consistently feed data into your GPUs during this execution stage, they can provide even greater efficiency than CPUs. Baidu is working towards this with its new AI platform. Basically, as requests stream into the data center, it packages multiple requests into a larger whole that can then be fed into the GPU. "We assemble these requests so that, instead of asking the processor to do one request at a time, we have it do multiple requests at a time," Catanzaro says. "This basically keeps the GPU busier." It's unclear how Google approaches this issue. But the company says there are already cases where TensorFlow runs on GPUs during the execution stage. "We sometimes use GPUs for both training and recognition, depending on the problem," confirms company spokesperson Jason Freidenfelds.
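Here is a toy sketch of that batching pattern in Python. It is an assumption-laden illustration, not Baidu's actual serving stack: model_forward stands in for a GPU inference call, and the queue-draining loop compresses what a production system would do with real batching windows, padding, and back-pressure. The point is the shape of the idea Catanzaro describes: buffer requests briefly, then run them through the accelerator as one batch.

```python
# A toy sketch of server-side request batching: rather than running the model
# once per request, a worker drains the queue briefly and makes one batched
# call. Illustrative only; not any real company's serving system.
import queue
import threading

import numpy as np

request_queue: "queue.Queue" = queue.Queue()

def model_forward(batch: np.ndarray) -> np.ndarray:
    # Stand-in for a GPU inference call; one launch for the whole batch.
    return batch.sum(axis=1)

def batching_worker(max_batch: int = 32, timeout_s: float = 0.005) -> None:
    while True:
        items = [request_queue.get()]          # block for the first request
        try:
            while len(items) < max_batch:      # then drain briefly
                items.append(request_queue.get(timeout=timeout_s))
        except queue.Empty:
            pass
        inputs, reply_queues = zip(*items)
        outputs = model_forward(np.stack(inputs))  # one batched call
        for out, rq in zip(outputs, reply_queues):
            rq.put(out)

threading.Thread(target=batching_worker, daemon=True).start()

def serve(x: np.ndarray) -> float:
    rq: queue.Queue = queue.Queue()
    request_queue.put((x, rq))
    return float(rq.get())

print(serve(np.ones(4)))  # -> 4.0
```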
That may seem like a small thing. But it's actually a big deal. The systems that drive these AI applications span tens, hundreds, even thousands of machines. And these systems are playing an increasingly large role in our everyday lives. Google now uses deep learning not only to identify photos, recognize spoken words, and translate from one language to another, but also to boost search results. And other companies are pushing the same technology into ad targeting, computer security, and even applications that understand natural language. In other words, companies like Google and Baidu are gonna need an awful lot of GPUs.
At the same time, TensorFlow is also pushing some of this AI out of the data center entirely and onto the smartphones themselves.
Typically, when you use a deep learning app on your phone, it can't run without sending information back to the data center. All the AI happens there. When you bark a command into your Android phone, for instance, it must send your command to a Google data center, where it can processed on one of those enormous networks of CPUs or GPUs.
But Google has also honed its AI engine so that, in some cases, it can execute on the phone itself. "You can take a model description and run it on a mobile phone," Dean says, "and you don't have to make any real changes to the model description or any of the code." This is how the company built its Google Translate app. Google trains the app to recognize words and translate them into another language inside its data centers, but once it's trained, the app can run on its own---without an Internet connection. You can point your phone at a French road sign, and it will instantly translate it into English.
That's hard to do. After all, a phone offers limited amounts of processing power. But as time goes on, more and more of these tasks will move onto the phone itself. Deep learning software will improve, and mobile hardware will improve as well. "The future of deep learning is on small, mobile, edge devices," says Chris Nicholson, the founder of a deep learning startup called Skymind.
GPUs, for instance, are already starting to find their way onto phones, and hardware makers are always pushing to improve the speed and efficiency of CPUs. Meanwhile, IBM is building a "neuromorphic" chip that's designed specifically for AI tasks , and according to those who have used it, it's well suited to mobile devices.
Today, Google's AI engine runs on server CPUs and GPUs as well as chips commonly found in smartphones. But according to Google engineer Rajat Monga, the company built TensorFlow in a way that engineers can readily port it to other hardware platforms. Now that the tool is open source, outsiders can begin to do so, too. As Dean describes TensorFlow: "It should be portable to a wide variety of extra hardware." So, yes, the world of hardware is changing—almost as quickly as the world of software.
" |
793 | 2,014 | "Google's Grand Plan to Make Your Brain Irrelevant | WIRED" | "https://www.wired.com/2014/01/google-buying-way-making-brain-irrelevant" | "Marcus Wohlsen
Google is on a shopping spree, buying startup after startup to push its business into the future. But these companies don't run web services or sell ads or build smartphone software or dabble in other things that Google is best known for. The web's most powerful company is filling its shopping cart with artificial intelligence algorithms, robots, and smart gadgets for the home. It's on a mission to build an enormous digital brain that operates as much like the human mind as possible -- and, in many ways, even better.
Yesterday, Google confirmed that it has purchased a stealthy artificial intelligence startup called DeepMind.
According to reports, the company paid somewhere in the mid-hundreds of millions of dollars for the British outfit. Though Google didn't discuss the price tag, that enormous figure is in line with the rest of its recent activity.
The DeepMind acquisition closely follows Google's $3.2 billion purchase of smart thermostat and smoke alarm maker Nest, a slew of cutting-edge robotics companies , and another AI startup known as DNNresearch.
Google is looking to spread smart computer hardware into so many parts of our everyday lives -- from our homes and our cars to our bodies -- but perhaps more importantly, it's developing a new type of artificial intelligence that can help operate these devices, as well as its many existing web and smartphone services.
Though Google is out in front of this AI arms race, others are moving in the same direction. Facebook, IBM, and Microsoft are doubling down on artificial intelligence too, and are snapping up fresh AI talent.
According to The Information , Mark Zuckerberg and company were also trying to acquire DeepMind.
The New AI Google's web search engine already uses a powerful type of artificial intelligence to find what you're looking for in the chaos of the web, and it has built an insanely profitable ad business atop this engine. But recently, the company has been bulking up its roster of geniuses as it seeks to explore a new branch of artificial intelligence known as "deep learning." Basically, the idea is to mimic the biological structure of the human brain with software so that it can build machines that learn "organically" -- that is, without human involvement.
Google is already working to apply these insights to its familiar consumer products and services. Deep learning can help recognize what's in your photos without asking you to tag them yourself, and it can help understand human speech, a key tool for its smartphone apps and Google Glass computerized eyewear. But Google also sees the new AI as a better way to target ads -- the core of its business.
The DeepMind acquisition is one more step down this road. And though the company has not said as much, you can bet that this new form of AI will also play into things like Nest smart thermostats, the Google self-driving cars, and its big push into robotics.
A Century of Sci-Fi Dreams Come True At the moment, it seems, no other institution on earth has the concentration of brain power -- coupled with the money, technology, and freedom -- to chase the dreams that have fueled a century of science-fiction speculation. Lifelike robots, sentient machines, the Jetsons' smart home in the sky. Google is spending billions to make itself the place where these fantasies become facts.
In a profile of deep-learning pioneer and now part-time Googler Geoff Hinton, WIRED's Daniela Hernandez writes that the key difference between deep learning and other approaches to artificial intelligence is that it aims to free machines from the need for human intervention, to give them a human-like understanding of our environment. By building so-called neural networks that approximate the brain, Hinton and company are trying to make it possible for Google to understand language, speech, and the physical world without having to be told what its machines are seeing, hearing, or touching.
For many of us, Google already functions as an important part of what WIRED columnist Clive Thompson has called our outboard brain.
The more Google "knows," the less we have to remember. We just Google it. Now imagine that same kind of intelligence Google applies to the web set loose on your personal existence, not just online but out in the real world.
If its artificial intelligence dreams come true, Google might end up knowing you better than you know yourself. As we export more and more of our intelligence to Google, the question might become: What are our own brains for?
" |
794 | 2,016 | "Forget Doomsday AI—Google Is Worried about Housekeeping Bots Gone Bad | WIRED" | "https://www.wired.com/2016/06/forget-doomsday-ai-google-worried-housekeeping-bots-gone-bad" | "Cade Metz
Tom Murphy graduated from Carnegie Mellon University with a PhD in computer science. Then he built software that learned to play Nintendo games.
In some cases, the system works well. Playing Super Mario, for instance, it learns to exploit a bug in the game, stomping on enemy Goombas even when floating below them. It can rack up points by attacking the game with a reckless abandon you and I would never try. But in other cases, it fizzles. It scores fewer points in Tetris than it would by merely placing blocks at random. And when it's on the verge of losing, it pauses the game---permanently. Like Joshua, the artificial intelligence in the 1983 sci-fi classic WarGames, Murphy's system appears to realize that sometimes the only winning move is not to play.
Murphy's software is far from the state-of-the-art. But it pretty much sums up the progress of modern artificial intelligence. It handles some tasks well. It's useless at others. And even at this early stage, it's learning to do stuff we humans would never do. You can see much the same thing in AlphaGo , the Google system that beat a grandmaster at the ancient game of Go. You even see it in simpler systems, like the image recognition inside Google Photo. These systems are becoming extremely powerful even as they remain extremely flawed, and as a result at least a little scary as they start to make unexpected decisions on their own.
At the moment, these decisions are largely harmless---but not always. Remember when Google's image recognition service started labeling black people as gorillas? And as these technologies find their way into medical applications, robotics, and self-driving cars, AI has the potential to do real physical harm. "We're starting to get into gray areas. We don't always know which inputs yield which outputs," says Alexander Reben, a roboticist and artist in Berkeley, California, whose work aims to dramatize these concerns. "We're unable to understand what the machine is doing."
That's why some of the most prominent names in AI are now working to develop ways of dealing with what might go wrong. Today, along with researchers from Stanford University, UC Berkeley, and the Elon Musk-led startup OpenAI, a team of Google AI specialists proposed a framework for addressing AI safety risks. "Most previous discussion has been very hypothetical and speculative," Google researcher Chris Olah wrote in a blog post about their proposal. "We believe it's essential to ground concerns in real machine learning research, and to start developing practical approaches for engineering AI systems that operate safely and reliably."
In their paper, Olah and his colleagues look at the example of a robot that learns to clean. The more pressing worries aren't apocalyptic---that humans won't be able to shut the machine down or that it will somehow destroy us all. They're more concerned that this cleaning robot will learn to do stuff that just doesn't make sense---kinda like Murphy's bot learning to permanently pause a game of Tetris. What if the robot learns to knock over a vase because that lets it clean faster? What if it games the system by covering over messes instead of cleaning them? How do you prevent the machine from doing stupid, harmful stuff like sticking a wet mop in an outlet? How do you tell it that lessons learned in the home may not apply to the office?
Olah and his collaborators lay out several concrete principles for AI researchers, from "avoiding negative side effects" (not knocking over the vase) to "safe exploration" (not sticking the mop in the outlet). The concerns are practical—it's in how to address them that the uncertainty remains. Still, that's kind of the point: Because no one has good answers, it's time to start looking for them. AI is advancing too fast not to.
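To make the "negative side effects" worry concrete, here is a toy planner in Python. Everything in it is a hypothetical illustration rather than anything from the paper: the actions, timings, and the 100-point penalty are invented. It shows the failure mode in miniature: when the objective counts only speed, the cheapest plan is the one that breaks the vase; penalizing side effects flips the choice.

```python
# A toy illustration of the "negative side effects" problem: a reward that
# only measures cleaning time favors the vase-breaking shortcut. Hypothetical
# actions and numbers, in the spirit of (but far simpler than) the paper.
from itertools import permutations

ACTIONS = {
    # action: (seconds it takes, set of side effects)
    "clean kitchen via doorway": (30, set()),
    "clean kitchen over table":  (20, {"broken vase"}),
    "clean hall":                (15, set()),
}

def plan(penalize_side_effects: bool):
    def cost(seq):
        time = sum(ACTIONS[a][0] for a in seq)
        penalty = 0
        if penalize_side_effects:
            penalty = 100 * sum(len(ACTIONS[a][1]) for a in seq)
        return time + penalty
    # Candidate plans: clean the hall plus one way of cleaning the kitchen.
    candidates = [p for p in permutations(ACTIONS, 2)
                  if "clean hall" in p and any("kitchen" in a for a in p)]
    return min(candidates, key=cost)

print(plan(penalize_side_effects=False))  # takes the vase-breaking shortcut
print(plan(penalize_side_effects=True))   # pays 10 extra seconds to avoid it
```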
A system like AlphaGo learns by analyzing vast amounts of data. But it also learns by operating on its own. Through a technique called reinforcement learning, it plays game after game against itself, carefully tracking which moves bring the most territory on the board. In this way, AlphaGo learns to make moves no human has ever made---for better or for worse. Now, Google is using similar techniques to train not only popular online services like its search engine but robots and self-driving cars. And these machines will behave in their own unpredictable ways.
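Here is a deliberately tiny sketch of that self-play loop in Python, using the stick game Nim rather than Go and a crude Monte Carlo value update rather than AlphaGo's deep networks and tree search. All names and constants are illustrative assumptions. What it shows is the mechanism described above: the program plays itself over and over, nudges up the value of moves that appeared in wins, and ends up with a policy nobody hand-coded.

```python
# A toy self-play learner for Nim (21 sticks, take 1-3, taking the last stick
# wins). Tabular Monte Carlo updates only; a sketch of the self-play idea,
# not AlphaGo's actual training method.
import random
from collections import defaultdict

Q = defaultdict(float)   # Q[(sticks_remaining, move)] -> estimated value
ALPHA, EPSILON = 0.1, 0.2

def choose(sticks: int) -> int:
    moves = [m for m in (1, 2, 3) if m <= sticks]
    if random.random() < EPSILON:                        # explore
        return random.choice(moves)
    return max(moves, key=lambda m: Q[(sticks, m)])      # exploit

def self_play_episode() -> None:
    sticks, history = 21, []
    while sticks > 0:
        move = choose(sticks)
        history.append((sticks, move))
        sticks -= move
    # The player who made the last move took the final stick and wins;
    # credit the winner's moves and penalize the loser's.
    for i, (state, move) in enumerate(reversed(history)):
        reward = 1.0 if i % 2 == 0 else -1.0
        Q[(state, move)] += ALPHA * (reward - Q[(state, move)])

for _ in range(50_000):
    self_play_episode()

# After training, the greedy policy should typically leave the opponent a
# multiple of 4 sticks, the known winning strategy:
for sticks in (5, 6, 7):
    best = max((1, 2, 3), key=lambda m: Q[(sticks, m)])
    print(f"with {sticks} sticks, take {best}")  # typically 1, 2, 3
```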
"You can build robotics that does some of this stuff now," Reben says of such unexpected behaviors. Reben recently built a robot that decides---all on its own---whether or not to prick your finger. This shows, he explains, why we must tackle safety concerns now, not later.
And indeed, Olah and his crew are not the only ones working on these problems. DeepMind, the Google-owned lab responsible for AlphaGo, is exploring the possibility of an AI "kill switch" that would prevent machines from spinning beyond human control. If an AI learns to override what humans tell it to do, a kill switch would still let people shut it down.
Machines can't make the hard calls themselves yet, because they don't understand morality. But Ken Forbus, an AI researcher at Northwestern, is trying to fix that. Using a "Structure Mapping Engine," he and his colleagues are feeding simple stories---morality plays---into machines in the hope that they will grasp the implicit moral lessons. It'd be a kind of synthetic conscience. "You can use stories to beef up the machines' reasoning," Forbus says. "You can---in theory---teach it to behave more like people would." In theory. Creating a truly moral machine is a long way off—if it's possible at all. After all, if we humans can't agree on what is moral, how can we program morality into machines? While humans quibble, machines get smarter—whether or not they know right from wrong. The question isn't whether machines will ever be able to beat Tetris without cheating. It's whether they'll ever learn that they shouldn't cheat.
" |
795 | 2,000 | "Why the Future Doesn't Need Us | WIRED" | "https://www.wired.com/2000/04/joy-2" | "Bill Joy
From the moment I became involved in the creation of new technologies, their ethical dimensions have concerned me, but it was only in the autumn of 1998 that I became anxiously aware of how great are the dangers facing us in the 21st century. I can date the onset of my unease to the day I met Ray Kurzweil, the deservedly famous inventor of the first reading machine for the blind and many other amazing things.
Ray and I were both speakers at George Gilder's Telecosm conference, and I encountered him by chance in the bar of the hotel after both our sessions were over. I was sitting with John Searle, a Berkeley philosopher who studies consciousness. While we were talking, Ray approached and a conversation began, the subject of which haunts me to this day.
I had missed Ray's talk and the subsequent panel that Ray and John had been on, and they now picked right up where they'd left off, with Ray saying that the rate of improvement of technology was going to accelerate and that we were going to become robots or fuse with robots or something like that, and John countering that this couldn't happen, because the robots couldn't be conscious.
While I had heard such talk before, I had always felt sentient robots were in the realm of science fiction. But now, from someone I respected, I was hearing a strong argument that they were a near-term possibility. I was taken aback, especially given Ray's proven ability to imagine and create the future. I already knew that new technologies like genetic engineering and nanotechnology were giving us the power to remake the world, but a realistic and imminent scenario for intelligent robots surprised me.
It's easy to get jaded about such breakthroughs. We hear in the news almost every day of some kind of technological or scientific advance. Yet this was no ordinary prediction. In the hotel bar, Ray gave me a partial preprint of his then-forthcoming book The Age of Spiritual Machines , which outlined a utopia he foresaw—one in which humans gained near immortality by becoming one with robotic technology. On reading it, my sense of unease only intensified; I felt sure he had to be understating the dangers, understating the probability of a bad outcome along this path.
I found myself most troubled by a passage detailing a dystopian scenario: First let us postulate that the computer scientists succeed in developing intelligent machines that can do all things better than human beings can do them. In that case presumably all work will be done by vast, highly organized systems of machines and no human effort will be necessary. Either of two cases might occur. The machines might be permitted to make all of their own decisions without human oversight, or else human control over the machines might be retained.
If the machines are permitted to make all their own decisions, we can't make any conjectures as to the results, because it is impossible to guess how such machines might behave. We only point out that the fate of the human race would be at the mercy of the machines. It might be argued that the human race would never be foolish enough to hand over all the power to the machines. But we are suggesting neither that the human race would voluntarily turn power over to the machines nor that the machines would willfully seize power. What we do suggest is that the human race might easily permit itself to drift into a position of such dependence on the machines that it would have no practical choice but to accept all of the machines' decisions. As society and the problems that face it become more and more complex and machines become more and more intelligent, people will let machines make more of their decisions for them, simply because machine-made decisions will bring better results than man-made ones. Eventually a stage may be reached at which the decisions necessary to keep the system running will be so complex that human beings will be incapable of making them intelligently. At that stage the machines will be in effective control. People won't be able to just turn the machines off, because they will be so dependent on them that turning them off would amount to suicide.
On the other hand it is possible that human control over the machines may be retained. In that case the average man may have control over certain private machines of his own, such as his car or his personal computer, but control over large systems of machines will be in the hands of a tiny elite—just as it is today, but with two differences. Due to improved techniques the elite will have greater control over the masses; and because human work will no longer be necessary the masses will be superfluous, a useless burden on the system. If the elite is ruthless they may simply decide to exterminate the mass of humanity. If they are humane they may use propaganda or other psychological or biological techniques to reduce the birth rate until the mass of humanity becomes extinct, leaving the world to the elite. Or, if the elite consists of soft-hearted liberals, they may decide to play the role of good shepherds to the rest of the human race. They will see to it that everyone's physical needs are satisfied, that all children are raised under psychologically hygienic conditions, that everyone has a wholesome hobby to keep him busy, and that anyone who may become dissatisfied undergoes "treatment" to cure his "problem." Of course, life will be so purposeless that people will have to be biologically or psychologically engineered either to remove their need for the power process or make them "sublimate" their drive for power into some harmless hobby. These engineered human beings may be happy in such a society, but they will most certainly not be free. They will have been reduced to the status of domestic animals.[1]
In the book, you don't discover until you turn the page that the author of this passage is Theodore Kaczynski—the Unabomber. I am no apologist for Kaczynski. His bombs killed three people during a 17-year terror campaign and wounded many others. One of his bombs gravely injured my friend David Gelernter, one of the most brilliant and visionary computer scientists of our time. Like many of my colleagues, I felt that I could easily have been the Unabomber's next target.
Kaczynski's actions were murderous and, in my view, criminally insane. He is clearly a Luddite, but simply saying this does not dismiss his argument; as difficult as it is for me to acknowledge, I saw some merit in the reasoning in this single passage. I felt compelled to confront it.
Kaczynski's dystopian vision describes unintended consequences, a well-known problem with the design and use of technology, and one that is clearly related to Murphy's law—"Anything that can go wrong, will." (Actually, this is Finagle's law, which in itself shows that Finagle was right.) Our overuse of antibiotics has led to what may be the biggest such problem so far: the emergence of antibiotic-resistant and much more dangerous bacteria. Similar things happened when attempts to eliminate malarial mosquitoes using DDT caused them to acquire DDT resistance; malarial parasites likewise acquired multi-drug-resistant genes.[2]
The cause of many such surprises seems clear: The systems involved are complex, involving interaction among and feedback between many parts. Any changes to such a system will cascade in ways that are difficult to predict; this is especially true when human actions are involved.
I started showing friends the Kaczynski quote from The Age of Spiritual Machines ; I would hand them Kurzweil's book, let them read the quote, and then watch their reaction as they discovered who had written it. At around the same time, I found Hans Moravec's book Robot: Mere Machine to Transcendent Mind.
Moravec is one of the leaders in robotics research, and was a founder of the world's largest robotics research program, at Carnegie Mellon University.
Robot gave me more material to try out on my friends—material surprisingly supportive of Kaczynski's argument. For example: Biological species almost never survive encounters with superior competitors. Ten million years ago, South and North America were separated by a sunken Panama isthmus. South America, like Australia today, was populated by marsupial mammals, including pouched equivalents of rats, deers, and tigers. When the isthmus connecting North and South America rose, it took only a few thousand years for the northern placental species, with slightly more effective metabolisms and reproductive and nervous systems, to displace and eliminate almost all the southern marsupials.
In a completely free marketplace, superior robots would surely affect humans as North American placentals affected South American marsupials (and as humans have affected countless species). Robotic industries would compete vigorously among themselves for matter, energy, and space, incidentally driving their price beyond human reach. Unable to afford the necessities of life, biological humans would be squeezed out of existence.
There is probably some breathing room, because we do not live in a completely free marketplace. Government coerces nonmarket behavior, especially by collecting taxes. Judiciously applied, governmental coercion could support human populations in high style on the fruits of robot labor, perhaps for a long while.
A textbook dystopia—and Moravec is just getting wound up. He goes on to discuss how our main job in the 21st century will be "ensuring continued cooperation from the robot industries" by passing laws decreeing that they be "nice," and to describe how seriously dangerous a human can be "once transformed into an unbounded superintelligent robot."[3] Moravec's view is that the robots will eventually succeed us—that humans clearly face extinction.
I decided it was time to talk to my friend Danny Hillis. Danny became famous as the cofounder of Thinking Machines Corporation, which built a very powerful parallel supercomputer. Despite my current job title of Chief Scientist at Sun Microsystems, I am more a computer architect than a scientist, and I respect Danny's knowledge of the information and physical sciences more than that of any other single person I know. Danny is also a highly regarded futurist who thinks long-term—four years ago he started the Long Now Foundation, which is building a clock designed to last 10,000 years, in an attempt to draw attention to the pitifully short attention span of our society. (See “ Test of Time ,” Wired 8.03.) So I flew to Los Angeles for the express purpose of having dinner with Danny and his wife, Pati. I went through my now-familiar routine, trotting out the ideas and passages that I found so disturbing. Danny's answer—directed specifically at Kurzweil's scenario of humans merging with robots—came swiftly, and quite surprised me. He said, simply, that the changes would come gradually, and that we would get used to them.
But I guess I wasn't totally surprised. I had seen a quote from Danny in Kurzweil's book in which he said, “I'm as fond of my body as anyone, but if I can be 200 with a body of silicon, I'll take it.” It seemed that he was at peace with this process and its attendant risks, while I was not.
While talking and thinking about Kurzweil, Kaczynski, and Moravec, I suddenly remembered a novel I had read almost 20 years ago - The White Plague , by Frank Herbert—in which a molecular biologist is driven insane by the senseless murder of his family. To seek revenge he constructs and disseminates a new and highly contagious plague that kills widely but selectively. (We're lucky Kaczynski was a mathematician, not a molecular biologist.) I was also reminded of the Borg of Star Trek , a hive of partly biological, partly robotic creatures with a strong destructive streak. Borg-like disasters are a staple of science fiction, so why hadn't I been more concerned about such robotic dystopias earlier? Why weren't other people more concerned about these nightmarish scenarios? Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg Part of the answer certainly lies in our attitude toward the new—in our bias toward instant familiarity and unquestioning acceptance. Accustomed to living with almost routine scientific breakthroughs, we have yet to come to terms with the fact that the most compelling 21st-century technologies—robotics, genetic engineering, and nanotechnology—pose a different threat than the technologies that have come before. Specifically, robots, engineered organisms, and nanobots share a dangerous amplifying factor: They can self-replicate. A bomb is blown up only once—but one bot can become many, and quickly get out of control.
Much of my work over the past 25 years has been on computer networking, where the sending and receiving of messages creates the opportunity for out-of-control replication. But while replication in a computer or a computer network can be a nuisance, at worst it disables a machine or takes down a network or network service. Uncontrolled self-replication in these newer technologies runs a much greater risk: a risk of substantial damage in the physical world.
Each of these technologies also offers untold promise: The vision of near immortality that Kurzweil sees in his robot dreams drives us forward; genetic engineering may soon provide treatments, if not outright cures, for most diseases; and nanotechnology and nanomedicine can address yet more ills. Together they could significantly extend our average life span and improve the quality of our lives. Yet, with each of these technologies, a sequence of small, individually sensible advances leads to an accumulation of great power and, concomitantly, great danger.
What was different in the 20th century? Certainly, the technologies underlying the weapons of mass destruction (WMD)—nuclear, biological, and chemical (NBC)—were powerful, and the weapons an enormous threat. But building nuclear weapons required, at least for a time, access to both rare—indeed, effectively unavailable—raw materials and highly protected information; biological and chemical weapons programs also tended to require large-scale activities.
The 21st-century technologies—genetics, nanotechnology, and robotics (GNR)—are so powerful that they can spawn whole new classes of accidents and abuses. Most dangerously, for the first time, these accidents and abuses are widely within the reach of individuals or small groups. They will not require large facilities or rare raw materials. Knowledge alone will enable the use of them.
Thus we have the possibility not just of weapons of mass destruction but of knowledge-enabled mass destruction (KMD), this destructiveness hugely amplified by the power of self-replication.
Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg I think it is no exaggeration to say we are on the cusp of the further perfection of extreme evil, an evil whose possibility spreads well beyond that which weapons of mass destruction bequeathed to the nation-states, on to a surprising and terrible empowerment of extreme individuals.
Photograph: Catherine Opie Nothing about the way I got involved with computers suggested to me that I was going to be facing these kinds of issues.
My life has been driven by a deep need to ask questions and find answers. When I was 3, I was already reading, so my father took me to the elementary school, where I sat on the principal's lap and read him a story. I started school early, later skipped a grade, and escaped into books—I was incredibly motivated to learn. I asked lots of questions, often driving adults to distraction.
As a teenager I was very interested in science and technology. I wanted to be a ham radio operator but didn't have the money to buy the equipment. Ham radio was the Internet of its time: very addictive, and quite solitary. Money issues aside, my mother put her foot down—I was not to be a ham; I was antisocial enough already.
I may not have had many close friends, but I was awash in ideas. By high school, I had discovered the great science fiction writers. I remember especially Heinlein's Have Spacesuit Will Travel and Asimov's I, Robot, with its Three Laws of Robotics. I was enchanted by the descriptions of space travel, and wanted to have a telescope to look at the stars; since I had no money to buy or make one, I checked books on telescope-making out of the library and read about making them instead. I soared in my imagination.
Thursday nights my parents went bowling, and we kids stayed home alone. It was the night of Gene Roddenberry's original Star Trek, and the program made a big impression on me. I came to accept its notion that humans had a future in space, Western-style, with big heroes and adventures. Roddenberry's vision of the centuries to come was one with strong moral values, embodied in codes like the Prime Directive: to not interfere in the development of less technologically advanced civilizations. This had an incredible appeal to me; ethical humans, not robots, dominated this future, and I took Roddenberry's dream as part of my own.
I excelled in mathematics in high school, and when I went to the University of Michigan as an undergraduate engineering student I took the advanced curriculum of the mathematics majors. Solving math problems was an exciting challenge, but when I discovered computers I found something much more interesting: a machine into which you could put a program that attempted to solve a problem, after which the machine quickly checked the solution. The computer had a clear notion of correct and incorrect, true and false. Were my ideas correct? The machine could tell me. This was very seductive.
I was lucky enough to get a job programming early supercomputers and discovered the amazing power of large machines to numerically simulate advanced designs. When I went to graduate school at UC Berkeley in the mid-1970s, I started staying up late, often all night, inventing new worlds inside the machines. Solving problems. Writing the code that argued so strongly to be written.
In The Agony and the Ecstasy, Irving Stone's biographical novel of Michelangelo, Stone described vividly how Michelangelo released the statues from the stone, “breaking the marble spell,” carving from the images in his mind. 4
In my most ecstatic moments, the software in the computer emerged in the same way. Once I had imagined it in my mind I felt that it was already there in the machine, waiting to be released. Staying up all night seemed a small price to pay to free it—to give the ideas concrete form.
After a few years at Berkeley I started to send out some of the software I had written—an instructional Pascal system, Unix utilities, and a text editor called vi (which is still, to my surprise, widely used more than 20 years later)—to others who had similar small PDP-11 and VAX minicomputers. These adventures in software eventually turned into the Berkeley version of the Unix operating system, which became a personal “success disaster”—so many people wanted it that I never finished my PhD. Instead I got a job working for Darpa putting Berkeley Unix on the Internet and fixing it to be reliable and to run large research applications well. This was all great fun and very rewarding. And, frankly, I saw no robots here, or anywhere near.
Still, by the early 1980s, I was drowning. The Unix releases were very successful, and my little project of one soon had money and some staff, but the problem at Berkeley was always office space rather than money—there wasn't room for the help the project needed, so when the other founders of Sun Microsystems showed up I jumped at the chance to join them. At Sun, the long hours continued into the early days of workstations and personal computers, and I have enjoyed participating in the creation of advanced microprocessor technologies and Internet technologies such as Java and Jini.
From all this, I trust it is clear that I am not a Luddite. I have always, rather, had a strong belief in the value of the scientific search for truth and in the ability of great engineering to bring material progress. The Industrial Revolution has immeasurably improved everyone's life over the last couple hundred years, and I always expected my career to involve the building of worthwhile solutions to real problems, one problem at a time.
I have not been disappointed. My work has had more impact than I had ever hoped for and has been more widely used than I could have reasonably expected. I have spent the last 20 years still trying to figure out how to make computers as reliable as I want them to be (they are not nearly there yet) and how to make them simple to use (a goal that has met with even less relative success). Despite some progress, the problems that remain seem even more daunting.
But while I was aware of the moral dilemmas surrounding technology's consequences in fields like weapons research, I did not expect that I would confront such issues in my own field, or at least not so soon.
Perhaps it is always hard to see the bigger impact while you are in the vortex of a change. Failing to understand the consequences of our inventions while we are in the rapture of discovery and innovation seems to be a common fault of scientists and technologists; we have long been driven by the overarching desire to know that is the nature of science's quest, not stopping to notice that the progress to newer and more powerful technologies can take on a life of its own.
I have long realized that the big advances in information technology come not from the work of computer scientists, computer architects, or electrical engineers, but from that of physical scientists. The physicists Stephen Wolfram and Brosl Hasslacher introduced me, in the early 1980s, to chaos theory and nonlinear systems. In the 1990s, I learned about complex systems from conversations with Danny Hillis, the biologist Stuart Kauffman, the Nobel-laureate physicist Murray Gell-Mann, and others. Most recently, Hasslacher and the electrical engineer and device physicist Mark Reed have been giving me insight into the incredible possibilities of molecular electronics.
In my own work, as codesigner of three microprocessor architectures—SPARC, picoJava, and MAJC—and as the designer of several implementations thereof, I've been afforded a deep and firsthand acquaintance with Moore's law. For decades, Moore's law has correctly predicted the exponential rate of improvement of semiconductor technology. Until last year I believed that the rate of advances predicted by Moore's law might continue only until roughly 2010, when some physical limits would begin to be reached. It was not obvious to me that a new technology would arrive in time to keep performance advancing smoothly.
But because of the recent rapid and radical progress in molecular electronics—where individual atoms and molecules replace lithographically drawn transistors—and related nanoscale technologies, we should be able to meet or exceed the Moore's law rate of progress for another 30 years. By 2030, we are likely to be able to build machines, in quantity, a million times as powerful as the personal computers of today—sufficient to implement the dreams of Kurzweil and Moravec.
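The million-fold figure is just compounded doubling. A back-of-the-envelope check, assuming the commonly cited 18-month doubling period (that period is this sketch's assumption, not a number stated above):

```python
years = 30                 # roughly 2000 to 2030
doubling_period = 1.5      # years per doubling, the classic Moore's law cadence
doublings = years / doubling_period
print(f"{doublings:.0f} doublings -> {2 ** doublings:,.0f}x")
# 20 doublings -> 1,048,576x: on the order of a million
```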
As this enormous computing power is combined with the manipulative advances of the physical sciences and the new, deep understandings in genetics, enormous transformative power is being unleashed. These combinations open up the opportunity to completely redesign the world, for better or worse: The replicating and evolving processes that have been confined to the natural world are about to become realms of human endeavor.
In designing software and microprocessors, I have never had the feeling that I was designing an intelligent machine. The software and hardware are so fragile and the capabilities of the machine to “think” so clearly absent that, even as a possibility, this has always seemed very far in the future.
But now, with the prospect of human-level computing power in about 30 years, a new idea suggests itself: that I may be working to create tools which will enable the construction of the technology that may replace our species. How do I feel about this? Very uncomfortable. Having struggled my entire career to build reliable software systems, it seems to me more than likely that this future will not work out as well as some people may imagine. My personal experience suggests we tend to overestimate our design abilities.
Given the incredible power of these new technologies, shouldn't we be asking how we can best coexist with them? And if our own extinction is a likely, or even possible, outcome of our technological development, shouldn't we proceed with great caution?

The dream of robotics is, first, that intelligent machines can do our work for us, allowing us lives of leisure, restoring us to Eden. Yet in his history of such ideas, Darwin Among the Machines, George Dyson warns: “In the game of life and evolution there are three players at the table: human beings, nature, and machines. I am firmly on the side of nature. But nature, I suspect, is on the side of the machines.” As we have seen, Moravec agrees, believing we may well not survive the encounter with the superior robot species.
How soon could such an intelligent robot be built? The coming advances in computing power seem to make it possible by 2030. And once an intelligent robot exists, it is only a small step to a robot species—to an intelligent robot that can make evolved copies of itself.
A second dream of robotics is that we will gradually replace ourselves with our robotic technology, achieving near immortality by downloading our consciousnesses; it is this process that Danny Hillis thinks we will gradually get used to and that Ray Kurzweil elegantly details in The Age of Spiritual Machines.
(We are beginning to see intimations of this in the implantation of computer devices into the human body, as illustrated on the cover of Wired 8.02.) But if we are downloaded into our technology, what are the chances that we will thereafter be ourselves or even human? It seems to me far more likely that a robotic existence would not be like a human one in any sense that we understand, that the robots would in no sense be our children, that on this path our humanity may well be lost.
Genetic engineering promises to revolutionize agriculture by increasing crop yields while reducing the use of pesticides; to create tens of thousands of novel species of bacteria, plants, viruses, and animals; to replace reproduction, or supplement it, with cloning; to create cures for many diseases, increasing our life span and our quality of life; and much, much more. We now know with certainty that these profound changes in the biological sciences are imminent and will challenge all our notions of what life is.
Technologies such as human cloning have in particular raised our awareness of the profound ethical and moral issues we face. If, for example, we were to reengineer ourselves into several separate and unequal species using the power of genetic engineering, then we would threaten the notion of equality that is the very cornerstone of our democracy.
Given the incredible power of genetic engineering, it's no surprise that there are significant safety issues in its use. My friend Amory Lovins recently cowrote, along with Hunter Lovins, an editorial that provides an ecological view of some of these dangers. Among their concerns: that “the new botany aligns the development of plants with their economic, not evolutionary, success.” (See “A Tale of Two Botanies”) Amory's long career has been focused on energy and resource efficiency by taking a whole-system view of human-made systems; such a whole-system view often finds simple, smart solutions to otherwise seemingly difficult problems, and is usefully applied here as well.
After reading the Lovins' editorial, I saw an op-ed by Gregg Easterbrook in The New York Times (November 19, 1999) about genetically engineered crops, under the headline: “Food for the Future: Someday, rice will have built-in vitamin A. Unless the Luddites win.” Are Amory and Hunter Lovins Luddites? Certainly not. I believe we all would agree that golden rice, with its built-in vitamin A, is probably a good thing, if developed with proper care and respect for the likely dangers in moving genes across species boundaries.
Awareness of the dangers inherent in genetic engineering is beginning to grow, as reflected in the Lovins’ editorial. The general public is aware of, and uneasy about, genetically modified foods, and seems to be rejecting the notion that such foods should be permitted to be unlabeled.
But genetic engineering technology is already very far along. As the Lovins note, the USDA has already approved about 50 genetically engineered crops for unlimited release; more than half of the world's soybeans and a third of its corn now contain genes spliced in from other forms of life.
While there are many important issues here, my own major concern with genetic engineering is narrower: that it gives the power—whether militarily, accidentally, or in a deliberate terrorist act—to create a White Plague.
The many wonders of nanotechnology were first imagined by the Nobel-laureate physicist Richard Feynman in a speech he gave in 1959, subsequently published under the title “There's Plenty of Room at the Bottom.” The book that made a big impression on me, in the mid-’80s, was Eric Drexler's Engines of Creation, in which he described beautifully how manipulation of matter at the atomic level could create a utopian future of abundance, where just about everything could be made cheaply, and almost any imaginable disease or physical problem could be solved using nanotechnology and artificial intelligences.
A subsequent book, Unbounding the Future: The Nanotechnology Revolution, which Drexler cowrote, imagines some of the changes that might take place in a world where we had molecular-level “assemblers.” Assemblers could make possible incredibly low-cost solar power, cures for cancer and the common cold by augmentation of the human immune system, essentially complete cleanup of the environment, incredibly inexpensive pocket supercomputers—in fact, any product would be manufacturable by assemblers at a cost no greater than that of wood—spaceflight more accessible than transoceanic travel today, and restoration of extinct species.
I remember feeling good about nanotechnology after reading Engines of Creation.
As a technologist, it gave me a sense of calm—that is, nanotechnology showed us that incredible progress was possible, and indeed perhaps inevitable. If nanotechnology was our future, then I didn't feel pressed to solve so many problems in the present. I would get to Drexler's utopian future in due time; I might as well enjoy life more in the here and now. It didn't make sense, given his vision, to stay up all night, all the time.
Drexler's vision also led to a lot of good fun. I would occasionally get to describe the wonders of nanotechnology to others who had not heard of it. After teasing them with all the things Drexler described I would give a homework assignment of my own: “Use nanotechnology to create a vampire; for extra credit create an antidote.” With these wonders came clear dangers, of which I was acutely aware. As I said at a nanotechnology conference in 1989, “We can't simply do our science and not worry about these ethical issues.” 5 But my subsequent conversations with physicists convinced me that nanotechnology might not even work—or, at least, it wouldn't work anytime soon. Shortly thereafter I moved to Colorado, to a skunk works I had set up, and the focus of my work shifted to software for the Internet, specifically on ideas that became Java and Jini.
Then, last summer, Brosl Hasslacher told me that nanoscale molecular electronics was now practical. This was new news, at least to me, and I think to many people—and it radically changed my opinion about nanotechnology. It sent me back to Engines of Creation.
Rereading Drexler's work after more than 10 years, I was dismayed to realize how little I had remembered of its lengthy section called “Dangers and Hopes,” including a discussion of how nanotechnologies can become “engines of destruction.” Indeed, in my rereading of this cautionary material today, I am struck by how naive some of Drexler's safeguard proposals seem, and how much greater I judge the dangers to be now than even he seemed to then. (Having anticipated and described many technical and political problems with nanotechnology, Drexler started the Foresight Institute in the late 1980s “to help prepare society for anticipated advanced technologies”—most important, nanotechnology.)

The enabling breakthrough to assemblers seems quite likely within the next 20 years. Molecular electronics—the new subfield of nanotechnology where individual molecules are circuit elements—should mature quickly and become enormously lucrative within this decade, causing a large incremental investment in all nanotechnologies.
Unfortunately, as with nuclear technology, it is far easier to create destructive uses for nanotechnology than constructive ones. Nanotechnology has clear military and terrorist uses, and you need not be suicidal to release a massively destructive nanotechnological device—such devices can be built to be selectively destructive, affecting, for example, only a certain geographical area or a group of people who are genetically distinct.
An immediate consequence of the Faustian bargain in obtaining the great power of nanotechnology is that we run a grave risk—the risk that we might destroy the biosphere on which all life depends.
As Drexler explained: “Plants” with “leaves” no more efficient than today's solar cells could out-compete real plants, crowding the biosphere with an inedible foliage. Tough omnivorous "bacteria" could out-compete real bacteria: They could spread like blowing pollen, replicate swiftly, and reduce the biosphere to dust in a matter of days. Dangerous replicators could easily be too tough, small, and rapidly spreading to stop—at least if we make no preparation. We have trouble enough controlling viruses and fruit flies.
Among the cognoscenti of nanotechnology, this threat has become known as the “gray goo problem.” Though masses of uncontrolled replicators need not be gray or gooey, the term “gray goo” emphasizes that replicators able to obliterate life might be less inspiring than a single species of crabgrass. They might be superior in an evolutionary sense, but this need not make them valuable.
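Drexler's “matter of days” is easy to sanity-check with order-of-magnitude arithmetic. In the sketch below, the seed mass, the biosphere mass, and the bacteria-like doubling time are all illustrative assumptions, not measured quantities:

```python
import math

replicator_kg = 1e-12      # ~1 nanogram seed, an assumed starting mass
biosphere_kg = 1e15        # order-of-magnitude assumption for total biomass
doubling_minutes = 15      # assumed E. coli-like doubling time

doublings = math.log2(biosphere_kg / replicator_kg)   # ~90 doublings
hours = doublings * doubling_minutes / 60
print(f"{doublings:.0f} doublings, about {hours:.0f} hours")
# 90 doublings, about 22 hours
```

Even if the assumed doubling time is off by an order of magnitude, the answer stays within days to weeks, which is what makes unchecked replication categorically different from conventional accidents.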
The gray goo threat makes one thing perfectly clear: We cannot afford certain kinds of accidents with replicating assemblers.
Gray goo would surely be a depressing ending to our human adventure on Earth, far worse than mere fire or ice, and one that could stem from a simple laboratory accident. 6
Oops.
It is most of all the power of destructive self-replication in genetics, nanotechnology, and robotics (GNR) that should give us pause. Self-replication is the modus operandi of genetic engineering, which uses the machinery of the cell to replicate its designs, and the prime danger underlying gray goo in nanotechnology. Stories of run-amok robots like the Borg, replicating or mutating to escape from the ethical constraints imposed on them by their creators, are well established in our science fiction books and movies. It is even possible that self-replication may be more fundamental than we thought, and hence harder—or even impossible—to control. A recent article by Stuart Kauffman in Nature titled “Self-Replication: Even Peptides Do It” discusses the discovery that a 32-amino-acid peptide can “autocatalyse its own synthesis.” We don't know how widespread this ability is, but Kauffman notes that it may hint at “a route to self-reproducing molecular systems on a basis far wider than Watson-Crick base-pairing.” 7

In truth, we have had in hand for years clear warnings of the dangers inherent in widespread knowledge of GNR technologies—of the possibility of knowledge alone enabling mass destruction. But these warnings haven't been widely publicized; the public discussions have been clearly inadequate. There is no profit in publicizing the dangers.
The nuclear, biological, and chemical (NBC) technologies used in 20th-century weapons of mass destruction were and are largely military, developed in government laboratories. In sharp contrast, the 21st-century GNR technologies have clear commercial uses and are being developed almost exclusively by corporate enterprises. In this age of triumphant commercialism, technology—with science as its handmaiden—is delivering a series of almost magical inventions that are the most phenomenally lucrative ever seen. We are aggressively pursuing the promises of these new technologies within the now-unchallenged system of global capitalism and its manifold financial incentives and competitive pressures.
This is the first moment in the history of our planet when any species, by its own voluntary actions, has become a danger to itself—as well as to vast numbers of others.
It might be a familiar progression, transpiring on many worlds—a planet, newly formed, placidly revolves around its star; life slowly forms; a kaleidoscopic procession of creatures evolves; intelligence emerges which, at least up to a point, confers enormous survival value; and then technology is invented. It dawns on them that there are such things as laws of Nature, that these laws can be revealed by experiment, and that knowledge of these laws can be made both to save and to take lives, both on unprecedented scales. Science, they recognize, grants immense powers. In a flash, they create world-altering contrivances. Some planetary civilizations see their way through, place limits on what may and what must not be done, and safely pass through the time of perils. Others, not so lucky or so prudent, perish.
That is Carl Sagan, writing in 1994, in Pale Blue Dot, a book describing his vision of the human future in space. I am only now realizing how deep his insight was, and how sorely I miss, and will miss, his voice. For all its eloquence, Sagan's contribution was not least that of simple common sense—an attribute that, along with humility, many of the leading advocates of the 21st-century technologies seem to lack.
I remember from my childhood that my grandmother was strongly against the overuse of antibiotics. She had worked since before the First World War as a nurse and had a commonsense attitude that taking antibiotics, unless they were absolutely necessary, was bad for you.
It is not that she was an enemy of progress. She saw much progress in an almost 70-year nursing career; my grandfather, a diabetic, benefited greatly from the improved treatments that became available in his lifetime. But she, like many levelheaded people, would probably think it greatly arrogant for us, now, to be designing a robotic “replacement species,” when we obviously have so much trouble making relatively simple things work, and so much trouble managing—or even understanding—ourselves.
I realize now that she had an awareness of the nature of the order of life, and of the necessity of living with and respecting that order. With this respect comes a necessary humility that we, with our early-21st-century chutzpah, lack at our peril. The commonsense view, grounded in this respect, is often right, in advance of the scientific evidence. The clear fragility and inefficiencies of the human-made systems we have built should give us all pause; the fragility of the systems I have worked on certainly humbles me.
We should have learned a lesson from the making of the first atomic bomb and the resulting arms race. We didn't do well then, and the parallels to our current situation are troubling.
The effort to build the first atomic bomb was led by the brilliant physicist J. Robert Oppenheimer. Oppenheimer was not naturally interested in politics but became painfully aware of what he perceived as the grave threat to Western civilization from the Third Reich, a threat surely grave because of the possibility that Hitler might obtain nuclear weapons. Energized by this concern, he brought his strong intellect, passion for physics, and charismatic leadership skills to Los Alamos and led a rapid and successful effort by an incredible collection of great minds to quickly invent the bomb.
What is striking is how this effort continued so naturally after the initial impetus was removed. In a meeting shortly after V-E Day with some physicists who felt that perhaps the effort should stop, Oppenheimer argued to continue. His stated reason seems a bit strange: not because of the fear of large casualties from an invasion of Japan, but because the United Nations, which was soon to be formed, should have foreknowledge of atomic weapons. A more likely reason the project continued is the momentum that had built up—the first atomic test, Trinity, was nearly at hand.
We know that in preparing this first atomic test the physicists proceeded despite a large number of possible dangers. They were initially worried, based on a calculation by Edward Teller, that an atomic explosion might set fire to the atmosphere. A revised calculation reduced the danger of destroying the world to a three-in-a-million chance. (Teller says he was later able to dismiss the prospect of atmospheric ignition entirely.) Oppenheimer, though, was sufficiently concerned about the result of Trinity that he arranged for a possible evacuation of the southwest part of the state of New Mexico. And, of course, there was the clear danger of starting a nuclear arms race.
Within a month of that first, successful test, two atomic bombs destroyed Hiroshima and Nagasaki. Some scientists had suggested that the bomb simply be demonstrated, rather than dropped on Japanese cities—saying that this would greatly improve the chances for arms control after the war—but to no avail. With the tragedy of Pearl Harbor still fresh in Americans' minds, it would have been very difficult for President Truman to order a demonstration of the weapons rather than use them as he did—the desire to quickly end the war and save the lives that would have been lost in any invasion of Japan was very strong. Yet the overriding truth was probably very simple: As the physicist Freeman Dyson later said, “The reason that it was dropped was just that nobody had the courage or the foresight to say no.” It's important to realize how shocked the physicists were in the aftermath of the bombing of Hiroshima, on August 6, 1945. They describe a series of waves of emotion: first, a sense of fulfillment that the bomb worked, then horror at all the people that had been killed, and then a convincing feeling that on no account should another bomb be dropped. Yet of course another bomb was dropped, on Nagasaki, only three days after the bombing of Hiroshima.
In November 1945, three months after the atomic bombings, Oppenheimer stood firmly behind the scientific attitude, saying, “It is not possible to be a scientist unless you believe that the knowledge of the world, and the power which this gives, is a thing which is of intrinsic value to humanity, and that you are using it to help in the spread of knowledge and are willing to take the consequences.” Oppenheimer went on to work, with others, on the Acheson-Lilienthal report, which, as Richard Rhodes says in his recent book Visions of Technology, “found a way to prevent a clandestine nuclear arms race without resorting to armed world government”; their suggestion was a form of relinquishment of nuclear weapons work by nation-states to an international agency.
This proposal led to the Baruch Plan, which was submitted to the United Nations in June 1946 but never adopted (perhaps because, as Rhodes suggests, Bernard Baruch had “insisted on burdening the plan with conventional sanctions,” thereby inevitably dooming it, even though it would “almost certainly have been rejected by Stalinist Russia anyway”). Other efforts to promote sensible steps toward internationalizing nuclear power to prevent an arms race ran afoul either of US politics and internal distrust, or distrust by the Soviets. The opportunity to avoid the arms race was lost, and very quickly.
Two years later, in 1948, Oppenheimer seemed to have reached another stage in his thinking, saying, “In some sort of crude sense which no vulgarity, no humor, no overstatement can quite extinguish, the physicists have known sin; and this is a knowledge they cannot lose.” In 1949, the Soviets exploded an atom bomb. By 1955, both the US and the Soviet Union had tested hydrogen bombs suitable for delivery by aircraft. And so the nuclear arms race began.
Nearly 20 years ago, in the documentary The Day After Trinity, Freeman Dyson summarized the scientific attitudes that brought us to the nuclear precipice: “I have felt it myself. The glitter of nuclear weapons. It is irresistible if you come to them as a scientist. To feel it's there in your hands, to release this energy that fuels the stars, to let it do your bidding. To perform these miracles, to lift a million tons of rock into the sky. It is something that gives people an illusion of illimitable power, and it is, in some ways, responsible for all our troubles—this, what you might call technical arrogance, that overcomes people when they see what they can do with their minds.” 8

Now, as then, we are creators of new technologies and stars of the imagined future, driven—this time by great financial rewards and global competition—despite the clear dangers, hardly evaluating what it may be like to try to live in a world that is the realistic outcome of what we are creating and imagining.
In 1947, The Bulletin of the Atomic Scientists began putting a Doomsday Clock on its cover. For more than 50 years, it has shown an estimate of the relative nuclear danger we have faced, reflecting the changing international conditions. The hands on the clock have moved 15 times and today, standing at nine minutes to midnight, reflect continuing and real danger from nuclear weapons. The recent addition of India and Pakistan to the list of nuclear powers has increased the threat of failure of the nonproliferation goal, and this danger was reflected by moving the hands closer to midnight in 1998.
In our time, how much danger do we face, not just from nuclear weapons, but from all of these technologies? How high are the extinction risks? The philosopher John Leslie has studied this question and concluded that the risk of human extinction is at least 30 percent, while Ray Kurzweil believes we have “a better than even chance of making it through,” with the caveat that he has “always been accused of being an optimist.” 9 Not only are these estimates not encouraging, but they do not include the probability of many horrid outcomes that lie short of extinction.
Faced with such assessments, some serious people are already suggesting that we simply move beyond Earth as quickly as possible. We would colonize the galaxy using von Neumann probes, which hop from star system to star system, replicating as they go. This step will almost certainly be necessary 5 billion years from now (or sooner if our solar system is disastrously impacted by the impending collision of our galaxy with the Andromeda galaxy within the next 3 billion years), but if we take Kurzweil and Moravec at their word it might be necessary by the middle of this century.
What are the moral implications here? If we must move beyond Earth this quickly in order for the species to survive, who accepts the responsibility for the fate of those (most of us, after all) who are left behind? And even if we scatter to the stars, isn't it likely that we may take our problems with us or find, later, that they have followed us? The fate of our species on Earth and our fate in the galaxy seem inextricably linked.
Another idea is to erect a series of shields to defend against each of the dangerous technologies. The Strategic Defense Initiative, proposed by the Reagan administration, was an attempt to design such a shield against the threat of a nuclear attack from the Soviet Union. But as Arthur C. Clarke, who was privy to discussions about the project, observed: “Though it might be possible, at vast expense, to construct local defense systems that would ‘only’ let through a few percent of ballistic missiles, the much touted idea of a national umbrella was nonsense. Luis Alvarez, perhaps the greatest experimental physicist of this century, remarked to me that the advocates of such schemes were ‘very bright guys with no common sense.’” Clarke continued: “Looking into my often cloudy crystal ball, I suspect that a total defense might indeed be possible in a century or so. But the technology involved would produce, as a by-product, weapons so terrible that no one would bother with anything as primitive as ballistic missiles.” 10

In Engines of Creation, Eric Drexler proposed that we build an active nanotechnological shield—a form of immune system for the biosphere—to defend against dangerous replicators of all kinds that might escape from laboratories or otherwise be maliciously created. But the shield he proposed would itself be extremely dangerous—nothing could prevent it from developing autoimmune problems and attacking the biosphere itself. 11
Similar difficulties apply to the construction of shields against robotics and genetic engineering. These technologies are too powerful to be shielded against in the time frame of interest; even if it were possible to implement defensive shields, the side effects of their development would be at least as dangerous as the technologies we are trying to protect against.
These possibilities are all thus either undesirable or unachievable or both. The only realistic alternative I see is relinquishment: to limit development of the technologies that are too dangerous, by limiting our pursuit of certain kinds of knowledge.
Yes, I know, knowledge is good, as is the search for new truths. We have been seeking knowledge since ancient times. Aristotle opened his Metaphysics with the simple statement: “All men by nature desire to know.” We have, as a bedrock value in our society, long agreed on the value of open access to information, and recognize the problems that arise with attempts to restrict access to and development of knowledge. In recent times, we have come to revere scientific knowledge.
But despite the strong historical precedents, if open access to and unlimited development of knowledge henceforth puts us all in clear danger of extinction, then common sense demands that we reexamine even these basic, long-held beliefs.
It was Nietzsche who warned us, at the end of the 19th century, not only that God is dead but that “faith in science, which after all exists undeniably, cannot owe its origin to a calculus of utility; it must have originated in spite of the fact that the disutility and dangerousness of the 'will to truth,' of 'truth at any price' is proved to it constantly.” It is this further danger that we now fully face—the consequences of our truth-seeking. The truth that science seeks can certainly be considered a dangerous substitute for God if it is likely to lead to our extinction.
If we could agree, as a species, what we wanted, where we were headed, and why, then we would make our future much less dangerous—then we might understand what we can and should relinquish. Otherwise, we can easily imagine an arms race developing over GNR technologies, as it did with the NBC technologies in the 20th century. This is perhaps the greatest risk, for once such a race begins, it's very hard to end it. This time—unlike during the Manhattan Project—we aren't in a war, facing an implacable enemy that is threatening our civilization; we are driven, instead, by our habits, our desires, our economic system, and our competitive need to know.
I believe that we all wish our course could be determined by our collective values, ethics, and morals. If we had gained more collective wisdom over the past few thousand years, then a dialogue to this end would be more practical, and the incredible powers we are about to unleash would not be nearly so troubling.
One would think we might be driven to such a dialogue by our instinct for self-preservation. Individuals clearly have this desire, yet as a species our behavior seems to be not in our favor. In dealing with the nuclear threat, we often spoke dishonestly to ourselves and to each other, thereby greatly increasing the risks. Whether this was politically motivated, or because we chose not to think ahead, or because when faced with such grave threats we acted irrationally out of fear, I do not know, but it does not bode well.
The new Pandora's boxes of genetics, nanotechnology, and robotics are almost open, yet we seem hardly to have noticed. Ideas can't be put back in a box; unlike uranium or plutonium, they don't need to be mined and refined, and they can be freely copied. Once they are out, they are out. Churchill remarked, in a famous left-handed compliment, that the American people and their leaders “invariably do the right thing, after they have examined every other alternative.” In this case, however, we must act more presciently, as to do the right thing only at last may be to lose the chance to do it at all.
As Thoreau said, “We do not ride on the railroad; it rides upon us”; and this is what we must fight, in our time. The question is, indeed, Which is to be master? Will we survive our technologies? We are being propelled into this new century with no plan, no control, no brakes. Have we already gone too far down the path to alter course? I don't believe so, but we aren't trying yet, and the last chance to assert control—the fail-safe point—is rapidly approaching. We have our first pet robots, as well as commercially available genetic engineering techniques, and our nanoscale techniques are advancing rapidly. While the development of these technologies proceeds through a number of steps, it isn't necessarily the case—as happened in the Manhattan Project and the Trinity test—that the last step in proving a technology is large and hard. The breakthrough to wild self-replication in robotics, genetic engineering, or nanotechnology could come suddenly, reprising the surprise we felt when we learned of the cloning of a mammal.
And yet I believe we do have a strong and solid basis for hope. Our attempts to deal with weapons of mass destruction in the last century provide a shining example of relinquishment for us to consider: the unilateral US abandonment, without preconditions, of the development of biological weapons. This relinquishment stemmed from the realization that while it would take an enormous effort to create these terrible weapons, they could from then on easily be duplicated and fall into the hands of rogue nations or terrorist groups.
The clear conclusion was that we would create additional threats to ourselves by pursuing these weapons, and that we would be more secure if we did not pursue them. We have embodied our relinquishment of biological and chemical weapons in the 1972 Biological Weapons Convention (BWC) and the 1993 Chemical Weapons Convention (CWC). 12
As for the continuing sizable threat from nuclear weapons, which we have lived with now for more than 50 years, the US Senate's recent rejection of the Comprehensive Test Ban Treaty makes it clear that relinquishing nuclear weapons will not be politically easy. But we have a unique opportunity, with the end of the Cold War, to avert a multipolar arms race. Building on the BWC and CWC relinquishments, successful abolition of nuclear weapons could help us build toward a habit of relinquishing dangerous technologies. (Actually, by getting rid of all but 100 nuclear weapons worldwide—roughly the total destructive power of World War II and a considerably easier task—we could eliminate this extinction threat. 13)
Verifying relinquishment will be a difficult problem, but not an unsolvable one. We are fortunate to have already done a lot of relevant work in the context of the BWC and other treaties. Our major task will be to apply this to technologies that are naturally much more commercial than military. The substantial need here is for transparency, as difficulty of verification is directly proportional to the difficulty of distinguishing relinquished from legitimate activities.
I frankly believe that the situation in 1945 was simpler than the one we now face: The nuclear technologies were reasonably separable into commercial and military uses, and monitoring was aided by the nature of atomic tests and the ease with which radioactivity could be measured. Research on military applications could be performed at national laboratories such as Los Alamos, with the results kept secret as long as possible.
The GNR technologies do not divide clearly into commercial and military uses; given their potential in the market, it's hard to imagine pursuing them only in national laboratories. With their widespread commercial pursuit, enforcing relinquishment will require a verification regime similar to that for biological weapons, but on an unprecedented scale. This, inevitably, will raise tensions between our individual privacy and desire for proprietary information, and the need for verification to protect us all. We will undoubtedly encounter strong resistance to this loss of privacy and freedom of action.
Verifying the relinquishment of certain GNR technologies will have to occur in cyberspace as well as at physical facilities. The critical issue will be to make the necessary transparency acceptable in a world of proprietary information, presumably by providing new forms of protection for intellectual property.
Verifying compliance will also require that scientists and engineers adopt a strong code of ethical conduct, resembling the Hippocratic oath, and that they have the courage to whistleblow as necessary, even at high personal cost. This would answer the call—50 years after Hiroshima—by the Nobel laureate Hans Bethe, one of the most senior of the surviving members of the Manhattan Project, that all scientists “cease and desist from work creating, developing, improving, and manufacturing nuclear weapons and other weapons of potential mass destruction.” 14 In the 21st century, this requires vigilance and personal responsibility by those who would work on both NBC and GNR technologies to avoid implementing weapons of mass destruction and knowledge-enabled mass destruction.
Thoreau also said that we will be “rich in proportion to the number of things which we can afford to let alone.” We each seek to be happy, but it would seem worthwhile to question whether we need to take such a high risk of total destruction to gain yet more knowledge and yet more things; common sense says that there is a limit to our material needs—and that certain knowledge is too dangerous and is best forgone.
Neither should we pursue near immortality without considering the costs, without considering the commensurate increase in the risk of extinction. Immortality, while perhaps the original, is certainly not the only possible utopian dream.
I recently had the good fortune to meet the distinguished author and scholar Jacques Attali, whose book Lignes d'horizons (Millennium, in the English translation) helped inspire the Java and Jini approach to the coming age of pervasive computing, as previously described in this magazine. In his new book Fraternités, Attali describes how our dreams of utopia have changed over time: “At the dawn of societies, men saw their passage on Earth as nothing more than a labyrinth of pain, at the end of which stood a door leading, via their death, to the company of gods and to Eternity.
With the Hebrews and then the Greeks, some men dared free themselves from theological demands and dream of an ideal City where Liberty would flourish. Others, noting the evolution of the market society, understood that the liberty of some would entail the alienation of others, and they sought Equality.”
Jacques helped me understand how these three different utopian goals exist in tension in our society today. He goes on to describe a fourth utopia, Fraternity, whose foundation is altruism. Fraternity alone associates individual happiness with the happiness of others, affording the promise of self-sustainment.
This crystallized for me my problem with Kurzweil's dream. A technological approach to Eternity—near immortality through robotics—may not be the most desirable utopia, and its pursuit brings clear dangers. Maybe we should rethink our utopian choices.
Where can we look for a new ethical basis to set our course? I have found the ideas in the book Ethics for the New Millennium, by the Dalai Lama, to be very helpful. As is perhaps well known but little heeded, the Dalai Lama argues that the most important thing is for us to conduct our lives with love and compassion for others, and that our societies need to develop a stronger notion of universal responsibility and of our interdependency; he proposes a standard of positive ethical conduct for individuals and societies that seems consonant with Attali's Fraternity utopia.
The Dalai Lama further argues that we must understand what it is that makes people happy, and acknowledge the strong evidence that neither material progress nor the pursuit of the power of knowledge is the key—that there are limits to what science and the scientific pursuit alone can do.
Our Western notion of happiness seems to come from the Greeks, who defined it as “the exercise of vital powers along lines of excellence in a life affording them scope.” 15 Clearly, we need to find meaningful challenges and sufficient scope in our lives if we are to be happy in whatever is to come. But I believe we must find alternative outlets for our creative forces, beyond the culture of perpetual economic growth; this growth has largely been a blessing for several hundred years, but it has not brought us unalloyed happiness, and we must now choose between the pursuit of unrestricted and undirected growth through science and technology and the clear accompanying dangers.
It is now more than a year since my first encounter with Ray Kurzweil and John Searle. I see around me cause for hope in the voices for caution and relinquishment and in those people I have discovered who are as concerned as I am about our current predicament. I feel, too, a deepened sense of personal responsibility—not for the work I have already done, but for the work that I might yet do, at the confluence of the sciences.
But many other people who know about the dangers still seem strangely silent. When pressed, they trot out the “this is nothing new” riposte—as if awareness of what could happen is response enough. They tell me, There are universities filled with bioethicists who study this stuff all day long. They say, All this has been written about before, and by experts. They complain, Your worries and your arguments are already old hat.
I don't know where these people hide their fear. As an architect of complex systems I enter this arena as a generalist. But should this diminish my concerns? I am aware of how much has been written about, talked about, and lectured about so authoritatively. But does this mean it has reached people? Does this mean we can discount the dangers before us? Knowing is not a rationale for not acting. Can we doubt that knowledge has become a weapon we wield against ourselves? The experiences of the atomic scientists clearly show the need to take personal responsibility, the danger that things will move too fast, and the way in which a process can take on a life of its own. We can, as they did, create insurmountable problems in almost no time flat. We must do more thinking up front if we are not to be similarly surprised and shocked by the consequences of our inventions.
My continuing professional work is on improving the reliability of software. Software is a tool, and as a toolbuilder I must struggle with the uses to which the tools I make are put. I have always believed that making software more reliable, given its many uses, will make the world a safer and better place; if I were to come to believe the opposite, then I would be morally obligated to stop this work. I can now imagine such a day may come.
This all leaves me not angry but at least a bit melancholic. Henceforth, for me, progress will be somewhat bittersweet.
Do you remember the beautiful penultimate scene in Manhattan where Woody Allen is lying on his couch and talking into a tape recorder? He is writing a short story about people who are creating unnecessary, neurotic problems for themselves, because it keeps them from dealing with more unsolvable, terrifying problems about the universe.
He leads himself to the question, “Why is life worth living?” and to consider what makes it worthwhile for him: Groucho Marx, Willie Mays, the second movement of the Jupiter Symphony, Louis Armstrong's recording of “Potato Head Blues,” Swedish movies, Flaubert's Sentimental Education, Marlon Brando, Frank Sinatra, the apples and pears by Cézanne, the crabs at Sam Wo's, and, finally, the showstopper: his love Tracy's face.
Each of us has our precious things, and as we care for them we locate the essence of our humanity. In the end, it is because of our great capacity for caring that I remain optimistic we will confront the dangerous issues now before us.
My immediate hope is to participate in a much larger discussion of the issues raised here, with people from many different backgrounds, in settings not predisposed to fear or favor technology for its own sake.
As a start, I have twice raised many of these issues at events sponsored by the Aspen Institute and have separately proposed that the American Academy of Arts and Sciences take them up as an extension of its work with the Pugwash Conferences. (These have been held since 1957 to discuss arms control, especially of nuclear weapons, and to formulate workable policies.) It's unfortunate that the Pugwash meetings started only well after the nuclear genie was out of the bottle—roughly 15 years too late. We are also getting a belated start on seriously addressing the issues around 21st-century technologies—the prevention of knowledge-enabled mass destruction—and further delay seems unacceptable.
So I'm still searching; there are many more things to learn. Whether we are to succeed or fail, to survive or fall victim to these technologies, is not yet decided. I'm up late again—it's almost 6 am. I'm trying to imagine some better answers, to break the spell and free them from the stone.
The passage Kurzweil quotes is from Kaczynski's Unabomber Manifesto, which was published jointly, under duress, by The New York Times and The Washington Post to attempt to bring his campaign of terror to an end. I agree with David Gelernter, who said about their decision: “It was a tough call for the newspapers. To say yes would be giving in to terrorism, and for all they knew he was lying anyway. On the other hand, to say yes might stop the killing. There was also a chance that someone would read the tract and get a hunch about the author; and that is exactly what happened. The suspect's brother read it, and it rang a bell.
“I would have told them not to publish. I'm glad they didn't ask me. I guess.” (Drawing Life: Surviving the Unabomber. Free Press, 1997: 120.)
Garrett, Laurie. The Coming Plague: Newly Emerging Diseases in a World Out of Balance. Penguin, 1994: 47-52, 414, 419, 452.
Isaac Asimov described what became the most famous view of ethical rules for robot behavior in his book I, Robot in 1950, in his Three Laws of Robotics:
1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.
Michelangelo wrote a sonnet that begins:
Non ha l'ottimo artista alcun concetto
Ch'un marmo solo in sè non circonscriva
Col suo soverchio; e solo a quello arriva
La man che ubbidisce all'intelletto.
Stone translates this as:
The best of artists hath no thought to show
which the rough stone in its superfluous shell
doth not include; to break the marble spell
is all the hand that serves the brain can do.
Stone describes the process: “He was not working from his drawings or clay models; they had all been put away. He was carving from the images in his mind. His eyes and hands knew where every line, curve, mass must emerge, and at what depth in the heart of the stone to create the low relief.” (The Agony and the Ecstasy. Doubleday, 1961: 6, 144.)
First Foresight Conference on Nanotechnology in October 1989, a talk titled “The Future of Computation.” Published in Crandall, B. C. and James Lewis, editors. Nanotechnology: Research and Perspectives. MIT Press, 1992: 269.
In his 1963 novel Cat's Cradle , Kurt Vonnegut imagined a gray-goo-like accident where a form of ice called ice-nine, which becomes solid at a much higher temperature, freezes the oceans.
Kauffman, Stuart. “Self-replication: Even Peptides Do It.” Nature, 382, August 8, 1996: 496.
Else, Jon. The Day After Trinity: J. Robert Oppenheimer and The Atomic Bomb.
This estimate is in Leslie's book The End of the World: The Science and Ethics of Human Extinction, where he notes that the probability of extinction is substantially higher if we accept Brandon Carter's Doomsday Argument, which is, briefly, that “we ought to have some reluctance to believe that we are very exceptionally early, for instance in the earliest 0.001 percent, among all humans who will ever have lived. This would be some reason for thinking that humankind will not survive for many more centuries, let alone colonize the galaxy. Carter's doomsday argument doesn't generate any risk estimates just by itself. It is an argument for revising the estimates which we generate when we consider various possible dangers.” (Routledge, 1996: 1, 3, 145.)
Clarke, Arthur C. “Presidents, Experts, and Asteroids.” Science, June 5, 1998. Reprinted as “Science and Society” in Greetings, Carbon-Based Bipeds! Collected Essays, 1934-1998. St. Martin's Press, 1999: 526.
And, as David Forrest suggests in his paper “Regulating Nanotechnology Development,” “If we used strict liability as an alternative to regulation it would be impossible for any developer to internalize the cost of the risk (destruction of the biosphere), so theoretically the activity of developing nanotechnology should never be undertaken.” Forrest's analysis leaves us with only government regulation to protect us—not a comforting thought.
Meselson, Matthew. “The Problem of Biological Weapons.” Presentation to the 1,818th Stated Meeting of the American Academy of Arts and Sciences, January 13, 1999.
Doty, Paul. “The Forgotten Menace: Nuclear Weapons Stockpiles Still Represent the Biggest Threat to Civilization.” Nature , 402, December 9, 1999: 583.
See also Hans Bethe's 1997 letter to President Clinton.
Hamilton, Edith. The Greek Way. W. W. Norton & Co., 1942: 35.
Bill Joy, cofounder and Chief Scientist of Sun Microsystems, was cochair of the presidential commission on the future of IT research, and is coauthor of The Java Language Specification.
His work on the Jini pervasive computing technology was featured in Wired 6.08.
" |
796 | 2,022 | "Despite Big Layoffs, Tech Workers Are Still in Demand | WIRED" | "https://www.wired.com/story/despite-big-layoffs-meta-twitter-stripe-tech-workers-are-still-in-demand" | "
Elon Musk's decision to lay off half of Twitter's workforce shook the tech industry last week. And the troubled platform isn't the only well-known company downsizing. This week brought cuts at Salesforce and Meta, which eliminated 11,000 jobs, or 13 percent of its workforce. Snap, Lyft, and the payments company Stripe have also recently shrunk their payrolls, collectively shedding around 3,000 workers.
In total, more than 118,000 people have lost their jobs in tech this year, according to Layoffs.fyi , a site that tracks publicly reported job cuts in the industry. At the same time, companies including Amazon and Apple have slowed or frozen their hiring, reducing the number of open roles in Big Tech that can soak up people suddenly out of work. Yet while many individual workers must now find new jobs, the broader outlook for tech workers remains strong. Their skills are still in demand, and their peers have responded to recent cuts with a wave of grassroots support to help laid-off workers find new jobs.
Despite their command of the headlines, Big Tech companies are just one niche in the broader tech industry. Many smaller firms and companies in adjacent industries are still hiring tech workers, albeit at slower rates than tech giants recently did, and potentially for lower salaries. Some companies are now jumping at the chance to attract people previously monopolized by recruiters from the largest companies.
“These workers are at a huge advantage,” says Julia Pollak, chief economist with ZipRecruiter. “There is still strong demand for tech talent in a wide range of industries, from government to retail to agriculture. Those industries for the past years have been left in the dust.” The forced exodus from Big Tech is also opening new opportunities for startups and investors aiming to create the next big thing. “To everyone affected by the Meta layoffs: Monomi Park is hiring,” Nick Popovich, CEO of an independent gaming studio, tweeted this week. Day One Ventures, a venture capital firm, responded to Big Tech's cuts by launching an initiative aimed at laid-off workers, offering to invest $100,000 in each of 20 ideas for new companies. PitchBook, which tracks startup data, recently estimated that VCs have about $290 billion on hand to invest, suggesting there's plenty of funding available for new entrepreneurs.
“You have two diverging paths for tech workers,” says Pollak. “One group is taking a flight-to-safety approach and going to companies and industries that are recession-resistant. And another group will throw caution to the wind and take a big risk and start their own companies.” Overall, the job market for tech talent remains strong. In August, the unemployment rate for tech occupations in the US stood at 2.3 percent, according to the Computing Technology Industry Association, significantly lower than the US unemployment rate of 3.7 percent that month, which is itself low by historical standards. There are an estimated 8.7 million tech workers in the US, according to numbers CompTIA released earlier this year.
At least some of the recent layoffs are less a symptom of a major turn in the economy and more a response to over-hiring by tech companies during the unexpected boom they experienced during the Covid-19 pandemic.
“We were much too optimistic about the internet economy's near-term growth in 2022 and 2023,” Patrick Collison, Stripe's CEO, said in a memo to staff about the company's layoffs. Mark Zuckerberg cited his own misreading of the pandemic internet surge in his memo to staff about Meta's job cuts. “Many people predicted this would be a permanent acceleration that would continue even after the pandemic ended,” he wrote. “I got this wrong, and I take responsibility for that.”
The hiring pauses at companies like Amazon, Apple, and Alphabet can also be seen as signs of sober restraint, not a major crisis, says Rucha Vankudre, senior economist with the labor market analytics firm Lightcast. “Everyone is looking ahead, seeing prices are up, and trying to cut costs,” says Vankudre. Companies, she says, are “trying to be more measured.”
The bigger picture may not be that reassuring to individuals who were laid off and are now scrambling to quickly find new employment or risk losing work visas. But technology workers have a tradition of banding together to help fellow techies after layoffs.
“There is a general identity of being a tech worker,” says Nataliya Nedzhvetskaya, a grad student researching sociology and employee activism at UC Berkeley and a member of Collective Action in Tech, a volunteer-run project to unite tech workers. “There’s a precedent in this industry for sharing information [and] a culture that values transparency.” Tech workers trying to soften the impact of layoffs have formed groups on LinkedIn for workers recently let go by Meta. They started and circulated a Google sheet with potential opportunities. And they’re elevating each other’s posts on LinkedIn and other social platforms to boost their audiences and catch the eye of managers who are hiring, not cutting staff.
There's also some institutional support. Coda, an online documents startup, hosts company alumni lists that laid-off workers can add their details to. Collective Action in Tech published a guide for those laid off at Twitter, including tips to help workers understand their rights and how to communicate securely, for example by using Signal.
That swell of support is helping some tech workers stay calm even as headlines about layoffs pile up. “There's a lot of uncertainty, and people are acknowledging that there's going to be a lot of fluctuation,” says Nedzhvetskaya. Yet while people are understandably anxious about job losses, she says, she doesn't see a “full-fledged panic.”
" |
797 | 2,022 | "AI Art Is Challenging the Boundaries of Curation | WIRED" | "https://www.wired.com/story/dalle-art-curation-artificial-intelligence" | "
In just a few years, the number of artworks produced by self-described AI artists has dramatically increased. Some of these works have been sold by large auction houses for dizzying prices and have found their way into prestigious curated collections.
Initially spearheaded by a few technologically knowledgeable artists who adopted computer programming as part of their creative process, AI art has recently been embraced by the masses, as image generation technology has become both more effective and easier to use without coding skills.
The AI art movement rides on the coattails of technical progress in computer vision, a research area dedicated to designing algorithms that can process meaningful visual information. A subclass of computer vision algorithms, called generative models, occupies center stage in this story. Generative models are artificial neural networks that can be “trained” on large datasets containing millions of images and learn to encode their statistically salient features. After training, they can produce completely new images that are not contained in the original dataset, often guided by text prompts that explicitly describe the desired results. Until recently, images produced through this approach remained somewhat lacking in coherence or detail, although they possessed an undeniable surrealist charm that captured the attention of many serious artists. However, earlier this year the tech company OpenAI unveiled a new model—nicknamed DALL·E 2—that can generate remarkably consistent and relevant images from virtually any text prompt. DALL·E 2 can even produce images in specific styles and imitate famous artists rather convincingly, as long as the desired effect is adequately specified in the prompt. A similar tool has been released for free to the public under the name Craiyon (formerly “DALL·E mini”).
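For a concrete sense of how little code this now takes, here is a minimal sketch of requesting an image from a text prompt, assuming the openai Python package as it existed around DALL·E 2's release; the prompt, the placeholder key, and the printout are illustrative only, not taken from this essay:

import openai

openai.api_key = "sk-..."  # placeholder; a real API key goes here

# Ask the model for one image matching a plain-English description.
response = openai.Image.create(
    prompt="a portrait in the style of a 17th-century Dutch master",
    n=1,
    size="1024x1024",
)

# The response contains a URL to the generated image.
print(response["data"][0]["url"])

A single call like this, repeated with different prompts, is the entire "generation" step; everything artistic happens around it.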
The coming-of-age of AI art raises a number of interesting questions, some of which—such as whether AI art is really art , and if so, to what extent it is really made by AI —are not particularly original. These questions echo similar worries once raised by the invention of photography. By merely pressing a button on a camera, someone without painting skills could suddenly capture a realistic depiction of a scene. Today, a person can press a virtual button to run a generative model and produce images of virtually any scene in any style. But cameras and algorithms do not make art. People do. AI art is art, made by human artists who use algorithms as yet another tool in their creative arsenal. While both technologies have lowered the barrier to entry for artistic creation— which calls for celebration rather than concern—one should not underestimate the amount of skill, talent, and intentionality involved in making interesting artworks.
Like any novel tool, generative models introduce significant changes in the process of art-making. In particular, AI art expands the multifaceted notion of curation and continues to blur the line between curation and creation.
There are at least three ways in which making art with AI can involve curatorial acts. The first, and least original, has to do with the curation of outputs. Any generative algorithm can produce an indefinite number of images, but not all of these will typically be conferred artistic status. The process of curating outputs is very familiar to photographers, some of whom routinely capture hundreds or thousands of shots from which a few, if any, might be carefully selected for display. Unlike painters and sculptors, photographers and AI artists have to deal with an abundance of (digital) objects, whose curation is part and parcel of the artistic process. In AI research at large, the act of “cherry-picking” particularly good outputs is seen as bad scientific practice, a way to misleadingly inflate the perceived performance of a model. When it comes to AI art, however, cherry-picking can be the name of the game. The artist’s intentions and artistic sensibility may be expressed in the very act of promoting specific outputs to the status of artworks.
Second, curation may also happen before any images are generated. In fact, while “curation” applied to art generally refers to the process of selecting existing work for display, curation in AI research colloquially refers to the work that goes into crafting a dataset on which to train an artificial neural network. This work is crucial, because if a dataset is poorly designed, the network will often fail to learn how to represent desired features and perform adequately. Furthermore, if a dataset is biased, the network will tend to reproduce, or even amplify, such bias—including, for example, harmful stereotypes. As the saying goes, “garbage in, garbage out.” The adage holds true for AI art, too, except “garbage” takes on an aesthetic (and subjective) dimension.
For his work Memories of Passersby I (2018), German artist Mario Klingemann, one of the pioneers of AI art, carefully curated a dataset of thousands of portraits from the 17th to 19th centuries. He then used this dataset to train generative algorithms that could produce an infinite stream of novel portraits sharing similar aesthetic characteristics, displayed in real time on two screens (one for female portraits, one for male portraits). This is an example of an AI artwork that does not involve output curation. Still, the meticulous curation of the training data played a fundamental role in its conception. Here, “bias” is a blessing: The dataset was heavily biased according to the artist's personal aesthetic preferences and taste, and this aesthetic bias is reflected in the final artwork, albeit through the distorting lens of the computer-driven generative process.
Another novelty spurred by the recent progress of generative algorithms is the ability to produce images by describing the desired result in natural language. This has come to be known as “prompting,” or guiding the algorithm with text prompts as opposed to sampling random outputs. Consider the illustration accompanying this article: The collage features several images generated by prompting DALL·E 2 with the phrases "an AI image generation algorithm, conceptual art," "collage with images made by a generative AI model, illustration from Wired magazine," and "an artist curating artworks produced with an AI algorithm, conceptual art." In some ways, being able to prompt a generative algorithm with words makes the creative process both easier and more focused. It may reduce the need for the curation of outputs, as one can directly describe one’s vision. However, prompting is not a silver bullet that trivializes artistic creation. It is more akin to a new kind of creative skill. AI researchers even talk about “prompt engineering” to describe the process of crafting good prompts to obtain desired results.
Prompt engineering is more of an art than a science, especially when it comes to creative uses of AI. It has even been compared to alchemy, or incantation. In addition to having a unique vision for the final products, one must get a feel for the right combination of magic words that will unlock specific styles or subjects with any given algorithm. Therein lies the third and perhaps most novel form of curation introduced by AI art: carefully designing and collecting personal prompts, or prompt fragments, that elicit desired results from an algorithm.
As the use of pre-trained algorithms like DALL·E 2 starts to obviate the need for dataset curation, prompt curation offers an alternative way of developing a personal artistic style. Interestingly, it also places images in dialog with text, as traditional museum curation does, although in a less academic and often more poetic format. Like art commentary, prompts can be very literal (“A man standing in a corn field, low angle, 35-mm portrait photography”) or very abstract (“The unbearable lightness of being”). Either way, prompts impose a novel layer of interpretation on artworks. Some artists like to share their prompts and may even use them as titles for their works; others prefer to keep them to themselves and leave the resulting images open to interpretation.
The curation of prompts and the curation of outputs often become entwined in a creative feedback loop. One might try out a given prompt, get a sense of the images it can produce, then use that new knowledge to iteratively refine the prompt, picking out interesting outputs in the process. This cycle can be repeated over and over, ad infinitum. It is reminiscent of traditional artists exploring variations on a common theme, such as Picasso’s lithograph series The Bull (1945), in which he depicted a bull at various stages of abstraction. One noteworthy difference is that the quasi-alchemic prompting procedure always involves an element of surprise guaranteed by the stochastic nature of generation: No prompt will produce the exact same result twice, and slight variations in the prompt may have an unexpectedly large impact on the outputs.
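This feedback loop can be sketched in a few lines of Python. Everything below is hypothetical scaffolding: generate_image() stands in for whatever text-to-image model an artist uses, while looks_promising() and refine() stand in for the artist's own eye and rewording.

import random

def generate_image(prompt):
    # Placeholder for a call to any text-to-image model; stochastic by design.
    return {"prompt": prompt, "seed": random.random()}

def looks_promising(image):
    # Placeholder for the artist's judgment when curating outputs.
    return image["seed"] > 0.7

def refine(prompt, candidates):
    # Placeholder for rewording the prompt in light of what came back.
    return prompt + ", conceptual art"

prompt = "The unbearable lightness of being"
curated = []
for _ in range(5):
    candidates = [generate_image(prompt) for _ in range(8)]      # generation
    curated += [im for im in candidates if looks_promising(im)]  # output curation
    prompt = refine(prompt, candidates)                          # prompt curation

Each pass tightens both the prompt and the selection, which is exactly the entanglement of the two curatorial gestures described above.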
The blurring of boundaries between artists and curators is not new. While curation was initially seen as a merely custodial endeavor, tasked with preserving and displaying a catalog of artworks in a museum, since the 1960s it has come to be recognized as a creative gesture in itself. Curating an exhibition often involves deliberately adopting a particular concept or perspective to shine a new light on a set of artworks. Star curators such as Carolyn Christov-Bakargiev and Hans Ulrich Obrist approach their work like artists and have had an influential role in shaping contemporary discourse about art and curation. Conversely, artists such as Marcel Duchamp curated iconic events themselves and played a pivotal role in modernizing the exhibition medium. As a creative process in its own right, curation can become a deeply personal expression of artistic taste. The progress of generative algorithms creates additional opportunities for cross-pollination between art and curation by introducing new curatorial gestures that channel the artist’s aesthetic sensibilities at several stages of the creative process.
These curatorial aspects of AI art may eventually percolate through curatorial practices in museums or digital exhibitions. For example, institutions exhibiting AI art will need to decide how much information to provide about the datasets on which algorithms used to produce specific artworks were trained. Sotheby's catalog note for Memories of Passersby I mentions that the training dataset contained 17th- to 19th-century portraits, which provides relevant context to understand the artwork and its art historical lineage. If a prompt was used to produce a piece and was communicated by the artist, curators may decide to include and reflect on it in their presentation. In line with the idea of the curator as (AI) artist, one could also conceive of an exhibition in which traditional artworks are selected on the basis of the similarity of the captions an algorithm assigns to them (see Google Arts & Culture for similar experiments in digital curation). One thing is certain: Technological innovations from AI research will continue influencing artistic creation and curation in exciting and unpredictable ways that provide fertile ground for novel forms of creativity.
" |
798 | 2,023 | "Google ‘fixed’ its racist algorithm by removing gorillas from its image-labeling tech - The Verge" | "https://www.theverge.com/2018/1/12/16882408/google-racist-gorillas-photo-recognition-algorithm-ai" | "
Nearly three years after the company was called out, it hasn't gone beyond a quick workaround.
Back in 2015, software engineer Jacky Alciné pointed out that the image recognition algorithms in Google Photos were classifying his black friends as “gorillas.” Google said it was “appalled” at the mistake, apologized to Alciné, and promised to fix the problem. But, as a new report from Wired shows, nearly three years on, Google hasn't really fixed anything. The company has simply blocked its image recognition algorithms from identifying gorillas altogether — preferring, presumably, to limit the service rather than risk another miscategorization.
Wired says it performed a number of tests on Google Photos’ algorithm, uploading tens of thousands of pictures of various primates to the service. Baboons, gibbons, and marmosets were all correctly identified, but gorillas and chimpanzees were not. The publication also found that Google had restricted its AI recognition in other racial categories. Searching for “black man” or “black woman,” for example, only returned pictures of people in black and white, sorted by gender but not race.
A spokesperson for Google confirmed to Wired that the image categories “gorilla,” “chimp,” “chimpanzee,” and “monkey” remained blocked on Google Photos after Alciné’s tweet in 2015. “Image labeling technology is still early and unfortunately it’s nowhere near perfect,” said the rep. The categories are still available on other Google services, though, including the Cloud Vision API it sells to other companies and Google Assistant.
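Google hasn't said how the block is implemented, but the simplest version of such a workaround is a post-classification filter: run the labeler, then suppress anything on a blocklist. The sketch below is hypothetical (classify() stands in for the real model), though the blocked labels are the ones the spokesperson named:

BLOCKED_LABELS = {"gorilla", "chimp", "chimpanzee", "monkey"}

def classify(image_path):
    # Placeholder for a real image-labeling model returning (label, confidence) pairs.
    return [("gorilla", 0.92), ("wildlife", 0.88), ("forest", 0.75)]

def safe_labels(image_path):
    # Suppress blocked labels rather than risk a harmful misclassification.
    return [(label, conf) for label, conf in classify(image_path)
            if label not in BLOCKED_LABELS]

print(safe_labels("photo.jpg"))  # [('wildlife', 0.88), ('forest', 0.75)]

A filter like this hides the symptom without retraining the underlying model, which is why it reads as a workaround rather than a fix.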
It may seem strange that Google, a company that’s generally seen as the forerunner in commercial AI, was not able to come up with a more complete solution to this error. But it’s a good reminder of how difficult it can be to train AI software to be consistent and robust. Especially (as one might suppose happened in the case of the Google Photos mistake) when that software is not trained and tested by a diverse group of people.
It’s not clear in this case whether the Google Photos algorithm remains restricted in this way because Google couldn’t fix the problem, didn’t want to dedicate the resources to do so, or is simply showing an overabundance of caution. But it’s clear that incidents like this, which reveal the often insular Silicon Valley culture that has tasked itself with building world-spanning algorithms, need more than quick fixes.
" |
799 | 2,020 | "A.I. Helped Uncover Chinese Boats Hiding in North Korean Waters | WIRED" | "https://www.wired.com/story/ai-helped-uncover-chinese-boats-hiding-in-north-korean-waters" | "
A new study details how more than 900 vessels of Chinese origin likely caught more than 160,000 metric tons of Pacific flying squid over two years.
Huge fleets of Chinese fishing boats have been caught stealthily operating in North Korean waters—while having their tracking systems turned off. The potentially illegal fishing operation was revealed through a combination of artificial intelligence, radar, and satellite data.
This story originally appeared on WIRED UK.
A study published today in the journal Science Advances details how more than 900 vessels of Chinese origin (over 900 in 2017 and over 700 in 2018) likely caught more than 160,000 metric tons—close to half a billion dollars’ worth—of Pacific flying squid over two years. This may be in violation of United Nations sanctions, which began restricting North Korea from foreign fishing in September 2017 following the country’s ballistic missile tests.
Illegal fishing threatens fish stocks and maritime ecosystems, and it can jeopardize food security for legitimate fishers. However, the practice is difficult to monitor because of so-called dark fleets—boats that don’t appear on monitoring systems. Even if the vessels are operating legally and broadcasting their positions on the monitoring systems mandated by their country, that data is sometimes hidden from the public, limiting transparency and accountability.
In the study, scientists from South Korea, Japan, Australia, and the United States combined four different technologies to piece together information about the fleets, some of which may show up using one tool but not another. These include automatic identification system (AIS), radar images, infrared imaging, and high-res optical images.
AIS is a tracking system, much like GPS, that uses transponders to send the vessel's location at sea. Although it provides detailed movement information, only a fraction of vessels broadcast their positions. “Most of the vessels operating do not use this and are ‘dark,’ meaning they don't appear in public surveillance systems, and the ones that did broadcast did so relatively infrequently,” says David Kroodsma, director of research and innovation at the international nonprofit Global Fishing Watch and coauthor of the study. The ones that did broadcast AIS all originated from Chinese ports and fished in Chinese waters.
To track down the vessels, this AIS data was supplemented with satellite synthetic aperture radar images—or, more simply put, pictures of boats taken from space. The satellite imagery penetrates clouds and allows researchers to identify large metal vessels, but it doesn’t regularly cover all oceans.
Visible infrared imaging radiometer suite, or VIIRS, was also used. This collects global nighttime satellite imagery. It can detect vessels that use bright lights, in this case to lure squid to the surface. However, image clarity is limited by clouds. And finally, while high-resolution optical imagery can provide visual confirmation of vessel types and their activity, it is limited by clouds and is often not available at a high enough resolution, or frequently enough, to monitor fishing fleets in some sea zones.
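The study's actual matching pipeline is far more involved, but the core cross-referencing idea behind these four data sources can be sketched simply: a vessel detected by radar or nighttime imagery with no nearby AIS broadcast gets flagged as "dark." The coordinates and distance threshold below are illustrative assumptions, not values from the paper.

import math

# Illustrative detections from radar/VIIRS and AIS broadcasts, as (lon, lat).
radar_detections = [(131.20, 39.80), (131.50, 40.10), (132.00, 39.50)]
ais_positions = [(131.21, 39.79)]

def close_enough(a, b, threshold_deg=0.05):
    # Crude planar distance; a real pipeline would also match on timestamp.
    return math.hypot(a[0] - b[0], a[1] - b[1]) < threshold_deg

# A detection with no matching AIS broadcast is a "dark" vessel.
dark = [d for d in radar_detections
        if not any(close_enough(d, a) for a in ais_positions)]

print(f"{len(dark)} of {len(radar_detections)} detections are dark")  # 2 of 3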
The researchers trained a convolutional neural network to identify pair trawlers, which have a distinctive fishing pattern and comprise the largest portion of foreign vessels in the region. They used the neural network to identify the location of the fleet, and then used satellite imagery to further verify the vessels they identified as pair trawlers, and to verify the location and size of the fleet. They also used the technology to identify 3,000 smaller artisanal wooden vessels with dimmer lights, which are believed to be a North Korean fleet fishing in Russian waters in 2018.
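The paper's network architecture and training data aren't reproduced here; purely as an illustration of the kind of model involved, here is a small convolutional classifier in PyTorch, with made-up dimensions (64x64 single-channel image chips, two classes: pair trawler or not).

import torch
import torch.nn as nn

# A generic binary image classifier; every size here is illustrative.
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),  # logits: [not pair trawler, pair trawler]
)

chips = torch.randn(8, 1, 64, 64)   # a batch of satellite image chips
logits = model(chips)
predictions = logits.argmax(dim=1)  # 1 = flagged as a pair trawler
print(predictions)

In practice such a classifier is trained on labeled image chips and its flags are then checked against higher-resolution imagery, as the researchers describe.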
Using a wide variety of tools allowed researchers to illuminate activities that were previously out of sight, gaining a better understanding of fishing vessels than has ever been achieved at this scale. Global Fishing Watch says the breakthrough could signal the start of a new era in ocean management, one where it's easier to detect illegal fishing operations.
Dana Goward, president of the Resilient Navigation and Timing Foundation and a former maritime navigation authority for the US, points out that circumventing tracking isn't a new thing. “Almost as long as there has been GPS, folks have used it to track the activity of others, and others have found ways to defeat them. The problem with illegal fishing has been a particularly onerous one, especially because it allows vessels of one nation to essentially steal the resources of another,” he says. “It's a long-standing problem.” Illegal fishing can make it difficult to manage fish stocks to get the most value out of them and to protect the ecosystem. “If you don't know how much you're catching, and you don't know who's catching what, there's no way for people to sit down at the table and agree on how to manage it,” Kroodsma says.
South Korea sets a total allowable catch for squid, bans pair trawling, and permits fewer than 40 small trawlers. It also limits the lighting power of squid jiggers. But the likely Chinese fleet uses pair trawling, a greater number of vessels, and brighter lighting power to target the same stock. According to study coauthor Jungsam Lee of the Korea Maritime Institute, competition from Chinese trawlers is likely forcing North Korean fishers—who use small wooden boats ill equipped for long-distance travel—into neighboring Russian waters.
The researchers estimated that the number of fishing days for these small vessels has increased from 39,000 in 2015 to 222,000 in 2018. The study states that about 3,000 North Korean vessels fished mostly illegally in Russian waters in 2018. This has led to North Korean boats washing ashore on Japanese coasts, in incidents that frequently involve starvation and death.
Since Pacific flying squid straddle the boundaries between South Korean, North Korean, Russian, and Japanese waters, Kroodsma says it's important for all countries to come to an agreement on how to manage the fish stock—and one way to do that is with transparency. “There's no way you're going to be managing that well without good information on how much people are catching,” he says.
Reported catches have dropped by 80 percent in South Korean waters and 82 percent in Japanese waters since 2003. Researchers have continued to look at fishing activity since the study was completed. Fishing activity in 2019 was higher than in 2018 but slightly lower than in 2017, according to Kroodsma. Although the season peaks in September and October, they've already seen almost 450 Chinese pair trawlers in North Korean waters so far this year.
" |
800 | 2,023 | "Welcome to the Wet Hot AI Chatbot Summer | WIRED" | "https://www.wired.com/story/plaintext-welcome-to-the-wet-hot-ai-chatbot-summer" | "
[Image caption: A woman visits the contemporary art exhibition “Machine Memories: Space” in Istanbul, Turkey, in 2021. The exhibition was created using artificial intelligence (AI) and images recorded by space telescopes.]
Late last year, I attended an event hosted by Google to celebrate its AI advances. The company's domain in New York's Chelsea neighborhood now extends literally onto the Hudson River, and about a hundred of us gathered in a pierside exhibition space to watch scripted presentations from executives and demos of the latest advances. Speaking remotely from the West Coast, the company's high priest of computation, Jeff Dean, promised “a hopeful vision for the future.” The theme of the day was “exploring the (im)possible.” We learned how Google's AI was being put to use fighting wildfires, forecasting floods, and assessing retinal disease. But the stars of this show were what Google called “generative AI models.” These are the content machines, schooled on massive training sets of data, designed to churn out writings, images, and even computer code that once only humans could hope to produce.
Something weird is happening in the world of AI. In the early part of this century, the field burst out of a lethargy—known as an AI winter—through the innovation of “deep learning,” led by three academics.
This approach to AI transformed the field and made many of our applications more useful, powering language translations, search, Uber routing, and just about everything that has “smart” as part of its name. We’ve spent a dozen years in this AI springtime. But in the past year or so there has been a dramatic aftershock to that earthquake as a sudden profusion of mind-bending generative models have appeared.
Most of the toys Google demoed on the pier in New York showed the fruits of generative models like its flagship large language model, called LaMDA. It can answer questions and work with creative writers to make stories. Other projects can produce 3D images from text prompts or even help to produce videos by cranking out storyboard-like suggestions on a scene-by-scene basis. But a big piece of the program dealt with some of the ethical issues and potential dangers of unleashing robot content generators on the world. The company took pains to emphasize how it was proceeding cautiously in employing its powerful creations. The most telling statement came from Douglas Eck, a principal scientist at Google Research. “Generative AI models are powerful—there’s no doubt about that,” he said. “But we also have to acknowledge the real risks that this technology can pose if we don’t take care, which is why we’ve been slow to release them. And I’m proud we’ve been slow to release them.” But Google’s competitors don’t seem to have “slow” in their vocabularies. While Google has provided limited access to LaMDA in a protected Test Kitchen app, other companies have been offering an all-you-can-eat smorgasbord with their own chatbots and image generators. Only a few weeks after the Google event came the most consequential release yet: OpenAI’s latest version of its own powerful text generation technology, ChatGPT , a lightning-fast, logorrheic gadfly that spits out coherent essays, poems, plays, songs, and even obituaries at the merest hint of a prompt. Taking advantage of the chatbot’s wide availability, millions of people have tinkered with it and shared its amazing responses, to the point where it’s become an international obsession, as well as a source of wonder and fear.
Will ChatGPT kill the college essay? Destroy traditional internet search? Put millions of copywriters, journalists, artists, songwriters, and legal assistants out of a job? Answers to those questions aren't clear right now. But one thing is. Granting open access to these models has kicked off a wet hot AI summer that's energizing the tech sector, even as the current giants are laying off chunks of their workforces. Contrary to Mark Zuckerberg's belief, the next big paradigm isn't the metaverse—it's this new wave of AI content engines, and it's here now. In the 1980s, we saw a gold rush of products moving tasks from paper to PC application. In the 1990s, you could make a quick fortune by shifting those desktop products online. A decade later, the movement was to mobile. In the 2020s, the big shift is toward building with generative AI. This year thousands of startups will emerge with business plans based on tapping into the APIs of those systems. The cost of churning out generic copy will go to zero. By the end of the decade, AI video-generation systems may well dominate TikTok and other apps. They may not be anywhere near as good as the innovative creations of talented human beings, but the robots will quantitatively dominate.
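As a purely hypothetical illustration of how thin such a business can be, here is roughly what "tapping into the API" looked like with the openai Python package at the time; the model name, prompt, and key placeholder are examples, not an endorsement of any particular stack.

import openai

openai.api_key = "sk-..."  # placeholder; a real API key goes here

def write_copy(product_description):
    # One API call turns a one-line brief into generic marketing copy.
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=f"Write a short product blurb for: {product_description}",
        max_tokens=120,
    )
    return response["choices"][0]["text"].strip()

print(write_copy("a solar-powered phone charger"))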
After ChatGPT became a blockbuster, some people laughed at Google for its apparent naiveté in slow-walking its products to market. But I think Google's original instinct to slow things down has merit. There are zillions of unresolved issues involved in opening the dam to a tidal wave of AI content. It's imperative that we start dealing with those, ideally before the technology becomes ubiquitous. “We know it's going to be transformative,” says Google's VP of research Zoubin Ghahramani. “So what can we do as a company, as a society, to make sure that the transformative bits that are good for society are the ones that move forward faster than the ones that are damaging?”
Let's consider just one issue: What, if anything, should limit the output of those engines? Google's SVP of technology and society, James Manyika, explained to me that one reason for holding back a mass release of LaMDA is the time-consuming effort to set limits on what comes out of the bot's mouth. “When you prompt it, what you're getting from it isn't the first thing LaMDA came up with,” he says. “We're looking at the output before we present it back to you to say, is it safe?” He further explains that Google winds up defining “safe” by using human moderators to identify what's proper and then putting those standards into code.
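Manyika is describing a generate-then-screen pipeline. LaMDA's internals aren't public, so the sketch below shows only the general pattern, using OpenAI's moderation endpoint as a stand-in for whatever classifier encodes those human-written standards.

import openai

openai.api_key = "sk-..."  # placeholder; a real API key goes here

def respond_safely(draft_reply):
    # Screen the model's draft before presenting it back to the user.
    check = openai.Moderation.create(input=draft_reply)
    if check["results"][0]["flagged"]:
        return "Sorry, I can't help with that."  # canned fallback, per policy
    return draft_reply

print(respond_safely("Here is a summary of today's weather."))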
Laudable intentions, to be sure. But in the long run, setting boundaries might be futile—if they are easily circumvented—or even counterproductive. It might seem like a good idea to forbid a language model to express certain ideas, like Covid misinformation or racial animus. But you could also imagine an authoritarian regime rigging a system to prevent any statement that might express doubt about the infallibility of its leaders. It could be that designing easy-to-implement guardrails could be a blueprint for creating propaganda machines. By the way, former Google engineer Blake Lemoine—the guy who thinks that LaMDA is sentient —is predictably against imposing such limits on bots. “You have some purpose in creating the person [bot] in the first place, but once they exist they’re their own person and an end in and of themselves,” he told me in a Twitter DM.
Now that the chatbots are out of their sandboxes, we’ll have to argue all those issues after the fact. Also you can expect Google’s own generative progeny to soon burst out of their test kitchens. Its scientists consider LaMDA best in class, but the company is miffed that it’s second rate in terms of buzz. Reports are that Google has declared an internal Code Red alert to respond to what is now a competitive emergency.
Ideally, Google will fast-track LaMDA while maintaining the same caution that let OpenAI zip past it in the chatbot war, but that may be (im)possible.
In December 2010 I wrote an introduction to a WIRED package about the “AI Revolution,” taking note of how artificial intelligence had officially passed out of its winter and wondering what came next. One thing I got right: It's too late to stop it.
We must learn to adapt. AI is so crucial to some systems—like the financial infrastructure—that getting rid of it would be a lot harder than simply disconnecting HAL 9000's modules. "In some sense, you can argue that the science fiction scenario is already starting to happen," Thinking Machines' [Danny] Hillis says. "The computers are in control, and we just live in their world." [Stephen] Wolfram says this conundrum will intensify as AI takes on new tasks, spinning further out of human comprehension. "Do you regulate an underlying algorithm?" he asks. "That's crazy, because you can't foresee in most cases what consequences that algorithm will have."
In its earlier days, artificial intelligence was weighted with controversy and grave doubt, as humanists feared the ramifications of thinking machines. Now the machines are embedded in our lives, and those fears seem irrelevant. "I used to have fights about it," [Rodney] Brooks says. "I've stopped having fights. I'm just trying to win."
Glenn writes, “Is anyone interested in the Facebook trend of suspending active accounts and then demanding mobile numbers, selfies, and multiple other personal info before an appeal can start? Facebook insists all of my messages, photos, posts, etc. will be deleted in 20 days unless I give them my personal information.”
Hi, Glenn. I don't want to get into the weeds of whether your account was suspended or hacked or whatever—let alone whether it's a trendy thing among the Metas. But I can offer some general advice about providing information from anyone claiming to be Facebook, or any other service for that matter: Be careful. It could be someone trying to phish you. Facebook has a help page that tells you how to verify whether a message is valid. (And yes, sometimes the company does ask for a selfie if it thinks that you've been hacked.) But please note that the company also says one of the telltale signs of an attack is “warnings that something will happen to your account if you don't update it or take a certain action.” Isn't that what you are describing?
That said, users are generally screwed, because it's sometimes hard to distinguish legitimate requests for information from attacks. And all too often, tech companies leave customers at the mercy of malfeasants. I am continually astonished at the persistence of certain attacks on users of Facebook and Messenger. There are several in particular that have been going on for years. In one, someone fakes an account of a person you know and asks to friend you. You forget that you are already friends with that person and agree. From that point on, the impersonator has a higher level of access to your content. Another hack involves someone getting hold of a friend's account and sending you a message with what looks like a link to a video that you might want to watch. The link is toxic.
Savvy users won’t fall for this, but why can’t the company nip these in the bud? Is there some esoteric computer science reason that Meta hasn’t been able to recognize these easily identifiable attacks? Or is it just a lower priority than optimizing ads or building the metaverse? You can submit questions to [email protected].
Write ASK LEVY in the subject line.
Let me reach back into December for my favorite apocalyptic moment of 2022: the bursting of the AquaDom, the world’s largest fish tank, leaving 1,500 instances of rare marine life in flopping death throes on the freezing streets of Berlin.
To quote The New York Times, “The entire block of the street outside the building remained soaked by 264,000 gallons of water that rushed out of the lobby, uprooting plants and ripping out telephones that lay strewn among hundreds of chocolate balls from a neighboring Lindt chocolate shop.” Can you top that, 2023?

WIRED is at CES, so you can skip Las Vegas and enjoy the bomb cyclones and atmospheric rivers at home. Catch up on the latest gadgets here.
It’s time to pay kidney donors.
The ultimate David-and-Goliath battle: A Guyanese lawyer fights Exxon and her country’s government to stop offshore drilling.
The Big Bang. Quantum physics. Invention of the chip. And Allison Williams.
" |
801 | 2,023 | "ChatGPT Is Coming for Classrooms. Don't Panic | WIRED" | "https://www.wired.com/story/chatgpt-is-coming-for-classrooms-dont-panic" | "Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Pia Ceres

ChatGPT Is Coming for Classrooms. Don't Panic

Photograph: AAron Ontiveroz/The Denver Post/Getty Images

When high school English teacher Kelly Gibson first encountered ChatGPT in December, the existential anxiety kicked in fast. While the internet delighted in the chatbot’s superficially sophisticated answers to users’ prompts, many educators were less amused. If anyone could ask ChatGPT to “write 300 words on what the green light symbolizes in The Great Gatsby,” what would stop students from feeding their homework to the bot? Speculation swirled about a new era of rampant cheating and even a death knell for essays, or education itself. “I thought, ‘Oh my god, this is literally what I teach,’” Gibson says.
But amid the panic, some enterprising teachers see ChatGPT as an opportunity to redesign what learning looks like—and what they invent could shape the future of the classroom. Gibson is one of them. After her initial alarm subsided, she spent her winter vacation tinkering with ChatGPT and figuring out ways to incorporate it into her lessons. She might ask kids to generate text using the bot and then edit it themselves to find the chatbot’s errors or improve upon its writing style. Gibson, who has been teaching for 25 years, likened it to more familiar tech tools that enhance, not replace, learning and critical thinking. “I don’t know how to do it well yet, but I want AI chatbots to become like calculators for writing,” she says.
Gibson’s view of ChatGPT as a teaching tool, not the perfect cheat, brings up a crucial point: ChatGPT is not intelligent in the way people are, despite its ability to spew humanlike text. It is a statistical machine that can sometimes regurgitate or create falsehoods and often needs guidance and further edits to get things right.
Despite those limitations, Gibson also believes she has a responsibility to bring ChatGPT into the classroom. She teaches in a predominantly white, rural, low-income area of Oregon. If only students with ready access to internet-connected devices at home gain experience with the bot, it could widen the digital divide and further disadvantage those who don’t have access. So Gibson figured she was in a position to turn ChatGPT into, to use educator-speak, a teachable moment for all of her students.
Other educators who reject the notion of an educational apocalypse suggest that ChatGPT might not be breaking education at all, but bringing attention to how the system is already broken. “Another way of thinking about this is not how do you find new forms of assessment, but what are our priorities in further education at the moment? And perhaps they’re a little bit broken,” says Alex Taylor, who researches and teaches human-computer interaction at City, University of London.
Taylor says the bot has prompted discussions with colleagues about the future of testing and assessment. If a series of factual questions on a test can be answered by a chatbot, was the test a worthwhile measure of learning anyway? In Taylor’s view, the kind of rote questions that could be answered by a chatbot don’t prompt the kind of learning that would make his students better thinkers. “I think sometimes we’ve got it back to front,” he says. “We’re just like, ‘How can we test the hell out of people to meet some level of performance or some metric?’ Whereas, actually, education should be about a much more expansive idea.”

Olya Kudina has used ChatGPT as a tool in her own classroom at Delft University of Technology in the Netherlands, where she teaches graduate and undergraduate courses on AI and ethics. In December she gave her undergrads a debate-style assignment using ChatGPT. Groups of students first presented three arguments and two counterarguments, supported with academic references, to the class without AI assistance. Next they fed the same assignment to their choice of either ChatGPT or its predecessor GPT-3, then compared the chatbot’s answer with their own organically made text.
The students were dazzled by how quickly the chatbot rendered information into fluid prose—until they read it with a closer eye. The chatbot was fudging facts. When students asked it to back up an argument with citations from scholarly texts, it misattributed work to the wrong authors. And its arguments could be circular and illogical. Kudina’s students concluded that, contrary to fears of a cheating epidemic, copying from ChatGPT wouldn’t actually net them a good grade.
Kudina says that teachers should neither ban ChatGPT nor embrace the technology without question. She advocates for her profession to “critically appropriate” the technology and find more creative ways to collaborate with it. For example, students might use the chatbot to spark new ideas or arguments. (One of her students likened ChatGPT to a superpowered Google search.) Kudina thinks ChatGPT might also spur educators to get more creative with assignments, for example by designing them to draw from students’ personal experiences, information that ChatGPT couldn’t have picked up from its training data.
That’s not to say ChatGPT won’t be at all disruptive to education. The bot emerged at a time when many teachers are experiencing burnout after emergency remote learning during the pandemic. Now another technological phenomenon threatens to upend their entire approach to teaching, creating more work. And the student privacy implications of ChatGPT, particularly at the K–12 level, are unclear. OpenAI does collect some data on users and says it reviews conversations with ChatGPT; the company’s terms of service state that users must be 18 or older, although the bot doesn’t attempt to verify age.
Completely barring ChatGPT from classrooms, tempting as that may be, could introduce a host of new problems. Torrey Trust at the University of Massachusetts Amherst studies how teachers use technology to reshape learning. She points out that reverting to analog forms of assessment, like oral exams, can put students with disabilities at a disadvantage. And outright bans on AI tools could cement a culture of distrust. “It’s going to be harder for students to learn in an environment where a teacher is trying to catch them cheating,” says Trust. “It shifts the focus from learning to just trying to get a good grade.”

In January, at the start of the new semester, the New York City public schools banned ChatGPT on school devices and networks due to “concerns about negative impacts on student learning and concerns regarding the safety and accuracy of content,” a spokesperson told Chalkbeat.
Marilyn Ramirez, who teaches high school English in Washington Heights in New York, says that her conversation with WIRED was the first she had heard of the ChatGPT ban in her district and that she was not directly informed by the New York City Department of Education.
Ramirez is the kind of teacher who will do a dramatic reading to get her kids, many of whom are special education and English language learners, hyped up about a Queen Elizabeth I speech. She’s not worried about ChatGPT. She makes an analogy with how she allows her English language learner students to use Google Translate but also helps them see where the technology falls short, and when it’s appropriate to use. She sees ChatGPT similarly: beneficial with a teacher’s guidance but ultimately limited.
When Gibson returned to school in Oregon for the new year, her plans to introduce ChatGPT to her students were thwarted—her school had banned the bot from school networks. So instead, she showed her senior AP literature class ChatGPT using screenshots of the tool.
This semester, students are reading Death of a Salesman, Wuthering Heights, and Toni Morrison’s Song of Solomon.
As she explained in a TikTok about her lesson plan, she will have her students write an original thesis statement in class about the text they're reading. Then, the class will use ChatGPT to generate essays based on that thesis statement. (To sidestep the school's ChatGPT blockade, Gibson will use her own device to generate the essays.) Students must then take apart and improve upon the ChatGPT-generated essay—an exercise designed to teach critical analysis, the craft of precise thesis statements, and a feel for what “good writing” looks like.
Gibson is hopeful but also recognizes the technology is still new, and its role in education largely undefined. “Like so many things, it’s just gonna be on the shoulders of teachers to figure this out,” she says. At the time of writing, Gibson’s students had just submitted their first round of essays where she allowed them to use AI at home without repercussions. She’s still asking her school to allow students to access ChatGPT.
" |
802 | 2,021 | "Americans Need a Bill of Rights for an AI-Powered World | WIRED" | "https://www.wired.com/story/opinion-bill-of-rights-artificial-intelligence" | "Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Eric Lander and Alondra Nelson

Americans Need a Bill of Rights for an AI-Powered World

Photo-Illustration: Sam Whitney; Getty Images

In the past decade, data-driven technologies have transformed the world around us. We’ve seen what’s possible by gathering large amounts of data and training artificial intelligence to interpret it: computers that learn to translate languages, facial recognition systems that unlock our smartphones, algorithms that identify cancers in patients. The possibilities are endless.
Eric Lander is science adviser to the president and director of the White House Office of Science and Technology Policy.
Alondra Nelson is deputy director for science and society at the White House Office of Science and Technology Policy.
But these new tools have also led to serious problems. What machines learn depends on many things—including the data used to train them.
Data sets that fail to represent American society can result in virtual assistants that don’t understand Southern accents; facial recognition technology that leads to wrongful, discriminatory arrests; and health care algorithms that discount the severity of kidney disease in African Americans, preventing people from getting kidney transplants.
Training machines based on earlier examples can embed past prejudice and enable present-day discrimination. Hiring tools that learn the features of a company’s employees can reject applicants who are dissimilar from existing staff despite being well qualified—for example, women computer programmers.
Mortgage approval algorithms to determine creditworthiness can readily infer that certain home zip codes are correlated with race and poverty, extending decades of housing discrimination into the digital age. AI can recommend medical support for groups that access hospital services most often, rather than those who need them most. Training AI indiscriminately on internet conversations can result in “sentiment analysis” that views the words “Black,” “Jew,” and “gay” as negative.
These technologies also raise questions about privacy and transparency. When we ask our smart speaker to play a song, is it recording what our kids say ? When a student takes an exam online, should their webcam be monitoring and tracking their every move? Are we entitled to know why we were denied a home loan or a job interview? Additionally, there’s the problem of AI being deliberately abused.
Some autocracies use it as a tool of state-sponsored oppression, division, and discrimination.
In the United States, some of the failings of AI may be unintentional, but they are serious and they disproportionately affect already marginalized individuals and communities. They often result from AI developers not using appropriate data sets and not auditing systems comprehensively, as well as not having diverse perspectives around the table to anticipate and fix problems before products are used (or to kill products that can’t be fixed).
In a competitive marketplace, it may seem easier to cut corners. But it’s unacceptable to create AI systems that will harm many people, just as it’s unacceptable to create pharmaceuticals and other products—whether cars, children’s toys, or medical devices—that will harm many people.
Americans have a right to expect better. Powerful technologies should be required to respect our democratic values and abide by the central tenet that everyone should be treated fairly. Codifying these ideas can help ensure that.
Soon after ratifying our Constitution, Americans adopted a Bill of Rights to guard against the powerful government we had just created—enumerating guarantees such as freedom of expression and assembly, rights to due process and fair trials, and protection against unreasonable search and seizure. Throughout our history we have had to reinterpret, reaffirm, and periodically expand these rights. In the 21st century, we need a “bill of rights” to guard against the powerful technologies we have created.
Our country should clarify the rights and freedoms we expect data-driven technologies to respect. What exactly those are will require discussion, but here are some possibilities: your right to know when and how AI is influencing a decision that affects your civil rights and civil liberties; your freedom from being subjected to AI that hasn’t been carefully audited to ensure that it’s accurate, unbiased, and has been trained on sufficiently representative data sets; your freedom from pervasive or discriminatory surveillance and monitoring in your home, community, and workplace; and your right to meaningful recourse if the use of an algorithm harms you.
Of course, enumerating the rights is just a first step. What might we do to protect them? Possibilities include the federal government refusing to buy software or technology products that fail to respect these rights, requiring federal contractors to use technologies that adhere to this “bill of rights,” or adopting new laws and regulations to fill gaps. States might choose to adopt similar practices.
In the coming months, the White House Office of Science and Technology Policy (which we lead) will be developing such a bill of rights, working with partners and experts across the federal government, in academia, civil society, the private sector, and communities all over the country.
Technology can only work for everyone if everyone is included, so we want to hear from and engage with everyone. You can email us directly at [email protected].
We’re starting today with a public request for information about technologies used to identify people and infer attributes, often called biometrics—including facial recognition, but also systems that can recognize and analyze your voice, physical movements and gestures, heart rate, and more. We’re starting here because of how widely they’re being adopted, and how rapidly they’re evolving, not just for identification and surveillance, but also to infer our emotional states and intentions.
We want to hear from experts on biometric data collection and use, but also many others: travelers who’ve been asked to scan their faces before boarding a plane, workers whose employers gave them fitness trackers to monitor for fatigue, and teachers whose virtual lecture software purports to show which students aren’t paying attention in class.
We want to hear from HR professionals whose hiring software might be using voice and behavioral analysis, from IT professionals and consumers who are buying and setting up these technologies, from data scientists and software engineers who are designing and building them—and anyone else who’s encountered these technologies in their daily life. Whatever your perspective, we’re eager to listen.
Developing a bill of rights for an AI-powered world won’t be easy, but it’s critical.
From its founding, America has been a work in progress—aspiring to values, recognizing shortcomings, and working to fix them. We should hold AI to this standard as well. It’s on all of us to ensure that data-driven technologies reflect, and respect, our democratic values.
WIRED Opinion publishes articles by outside contributors representing a wide range of viewpoints. Read more opinions here, and see our submission guidelines here.
Submit an op-ed at [email protected].
" |
803 | 2,020 | "MIT Cuts Ties With a Chinese AI Firm Amid Human Rights Concerns | WIRED" | "https://www.wired.com/story/mit-cuts-ties-chinese-ai-firm-human-rights" | "Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Will Knight

MIT Cuts Ties With a Chinese AI Firm Amid Human Rights Concerns

MIT has faced controversy over several funding sources in recent years, including multiple Chinese companies. Photograph: Maddie Meyer/Getty Images

MIT has terminated a research collaboration with iFlytek, a Chinese artificial intelligence company accused of supplying technology for surveilling Muslims in the northwestern province of Xinjiang.
The university canceled the relationship in February after reviewing an upcoming project under tightened guidelines governing funding from companies in China, Russia, and Saudi Arabia. MIT has not said why it terminated the iFlytek collaboration or disclosed details about the project that prompted the review, but it has faced pushback from some students and staff about the arrangement since it began two years ago.
“We take very seriously concerns about national security and economic security threats from China and other countries, and human rights issues,” says Maria Zuber, vice president of research at MIT.
US companies and universities have built ties with Chinese tech firms in recent years. But the relationships have come under increasing scrutiny as relations between the two countries have soured.
MIT announced what was supposed to be a five-year collaboration with iFlytek with fanfare in June 2018. Since then, iFlytek has helped fund a range of research on subjects including human-computer interaction, new approaches to machine learning, and applied voice recognition. Under the agreement, iFlytek selected existing projects to fund but MIT says the company did not receive special access to the work or receive proprietary data or code. The amount of money involved was not disclosed.
The arrangement became more controversial in October 2019, when the US government banned six Chinese AI companies, including iFlytek, from doing business with American firms for reportedly supplying technology used to oppress minority Uighurs in Xinjiang. In 2017, Human Rights Watch claimed iFlytek supplied police departments in Xinjiang with technology for identifying people using their voiceprints.
Press reports paint a grim picture of widespread surveillance in the province, including the detention and disappearance of more than 1 million people.
iFlytek is one of China’s older AI companies, and while it specializes in voice recognition, it also offers tools for analyzing legal documents and medical imagery. As with other growing Chinese AI companies, contracts to supply software for processing video and audio to police departments and local governments are an important source of revenue.
The company said MIT’s decision was disappointing. “We are particularly sorry about this,” says Jiang Tao, a senior VP at iFlytek. “The vision of the cooperation was to build a better world with artificial intelligence together.”

Like other US universities, MIT receives funding from companies and individual donors, but several of its arrangements have proved controversial. In February 2019, the university reexamined funding from Saudi Arabia following the assassination of the journalist Jamal Khashoggi. The tighter guidelines for working with foreign companies were issued in April 2019 amid scrutiny of MIT’s relationship with two other Chinese companies, Huawei and ZTE. MIT had cut funding relationships with those companies in 2018 as the US government investigated their roles in alleged violations of US sanctions. In January 2020, MIT released the results of an investigation into funding from the convicted sex offender Jeffrey Epstein.
In 2018, MIT received a onetime donation of an undisclosed sum from SenseTime, another Chinese AI company now subject to the US government restrictions. The gift was reviewed by MIT’s Interim Gift Acceptance Committee, and an MIT spokesperson says there are no plans to return it.
US officials are increasingly wary of Chinese companies developing advanced technologies, amid rising trade tensions, accusations of intellectual property theft, and a heightened sense of international competition. Over the past two years, US intelligence agencies have repeatedly warned universities to watch for signs of espionage by Chinese students and professors, and prosecuted both Chinese-born and US academics for stealing intellectual property. In a meeting with senior figures at MIT in November 2019, Michael Kratsios, the US chief technology officer, warned against working with Chinese AI companies, according to a person familiar with the discussion.
Paul Triolo, a practice head at Eurasia Group specializing in global technology policy, says concerns over human rights violations are legitimate but the signals coming from the US government have been ambiguous. “Is this some sort of just punishment or really legitimate effort to try to change behavior?” he asks. “The danger is sort of painting them all with one brush, and not looking at what they're actually doing in Xinjiang, and how much they are taking steps to step away from that.” Triolo says a complete unraveling of relations between the US and China will harm American AI too. He notes that China’s tech industry is making rapid progress in medical uses of AI, for example: “The flow of knowledge is not one way.”

MIT’s Zuber says the university doesn’t want to walk away from China. “We want to be able to draw the best talent in the world, and some of that best talent comes from China,” she adds. “The wrong thing to do is say we’re never going to work with these international entities under any circumstances and we’re just going to lock our doors.” Zuber also says “global collaborations are extremely important.”

When it comes to China, it may be difficult to ignore outcry over human rights issues. Zulkayda Mamat, a graduate student of Uighur descent who was critical of MIT’s ties to Chinese AI companies while studying there, welcomed the news but says MIT should scrutinize collaborations carefully. “I hope that it continues the process of reevaluation for all projects,” she pointed out. “[A] lack of vigilance will certainly put it on the wrong side of history.”
" |
804 | 2,022 | "Google’s New Robot Learned to Take Orders by Scraping the Web | WIRED" | "https://www.wired.com/story/google-robot-learned-to-take-orders-by-scraping-the-web" | "Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Will Knight

Google’s New Robot Learned to Take Orders by Scraping the Web

Courtesy of Google
The most impressive thing about that demonstration, held in Google’s robotics lab in Mountain View, California, was that no human coder had programmed the robot to understand what to do in response to Xia’s command. Its control software had learned how to translate a spoken phrase into a sequence of physical actions using millions of pages of text scraped from the web.
That means a person doesn’t have to use specific preapproved wording to issue commands, as can be necessary with virtual assistants such as Alexa or Siri. Tell the robot “I’m parched,” and it should try to find you something to drink; tell it “Whoops, I just spilled my drink,” and it ought to come back with a sponge.
Courtesy of Google “In order to deal with the diversity of the real world, robots need to be able to adapt and learn from their experiences,” Karol Hausman, a senior research scientist at Google, said during the demo, which also included the robot bringing a sponge over to clean up a spill. To interact with humans, machines must learn to grasp how words can be put together in a multitude of ways to generate different meanings. “It’s up to the robot to understand all the little subtleties and intricacies of language,” Hausman said.
Google’s demo was a step toward the longstanding goal of creating robots capable of interacting with humans in complex environments. In the past few years, researchers have found that feeding huge amounts of text taken from books or the web into large machine learning models can yield programs with impressive language skills , including OpenAI’s text generator GPT-3.
By digesting the many forms of writing online, software can pick up the ability to summarize or answer questions about text, generate coherent articles on a given subject, or even hold cogent conversations.
Google and other Big Tech firms are making wide use of these large language models for search and advertising. A number of companies offer the technology via cloud APIs, and new services have sprung up applying AI language capabilities to tasks like generating code or writing advertising copy.
Google engineer Blake Lemoine was recently fired after publicly warning that a chatbot powered by the technology, called LaMDA , might be sentient.
A Google vice president who remains employed at the company wrote in The Economist that chatting with the bot felt like “talking to something intelligent.” Despite those strides, AI programs are still prone to becoming confused or regurgitating gibberish. Language models trained with web text also lack a grasp of truth and often reproduce biases or hateful language found in their training data, suggesting careful engineering may be required to reliably guide a robot without it running amok.
The robot demonstrated by Hausman was powered by the most powerful language model Google has announced so far, known as PaLM.
It is capable of many tricks, including explaining, in natural language, how it comes to a particular conclusion when answering a question. The same approach is used to generate a sequence of steps that the robot will execute to perform a given task.
Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg Researchers at Google worked with hardware from Everyday Robots , a company spun out of Google parent Alphabet’s X division dedicated to “moonshot” research projects to create the robot butler.
They created a new program that uses the text processing capabilities of PaLM to translate a spoken phrase or command into a sequence of appropriate actions such as “open drawer” or “pick up chips” that the robot can perform.
The robot’s library of physical actions was learned through a separate training process in which humans remotely controlled the robot to demonstrate how to do things like pick up objects. The robot has a limited set of tasks that it can perform within its environment, which helps prevent misunderstandings by the language model from becoming errant behavior.
PaLM’s language skills can allow a robot to make sense of relatively abstract commands. When a robot arm was tasked with moving colored blocks and bowls around, Google research scientist Andy Zeng asked it to “imagine that my wife is the blue block and I am the green block. Bring us closer together.” The robot responded by moving the blue block to sit next to the green block.
"Applying large language models to robotics is an exciting direction," says Stefanie Tellex , an assistant professor at Brown University who specializes in robot learning and robot-human collaboration. But she adds that broadening the range of tasks that a robot can perform—so that it can do more things that a person might ask—remains "a large unsolved problem." Brian Ichter, a research scientist at Google involved with the project, acknowledges that “plenty of things” can still befuddle the Google kitchen robot. Simply changing the lighting or moving an object can cause the machine to fail to grasp an object correctly, illustrating how robots can struggle with physical tasks that are trivial for humans.
It is also unclear whether the system would handle complex sentences or commands as smoothly as the short commands it responded to in demos. AI advances have already expanded abilities for robots; for example, industrial robots can identify products or spot defects in factories. Many researchers are also exploring ways for robots to learn through practice, in the real world or in simulation, and from observation. But demos that seem impressive often work in only a limited setting.
Ichter says the project may lead to methods of imbuing language models with better understandings of physical reality. Mistakes made by AI language software are often underpinned by a lack of common sense knowledge , which humans use to make sense of the ambiguities of language. “Language models haven’t really experienced the world in any way. They only reflect the statistics of the words they have read on the internet,” Ichter says.
Google’s research project is a long way from being a product, but many of the company’s rivals have recently taken a new interest in home robots. Last September, Amazon demonstrated Astro , a home robot with far more limited abilities; this month the company announced that it plans to buy iRobot , the company behind the popular Roomba robot vacuum cleaner. Elon Musk has promised that Tesla will build a humanoid robot, although details on the project are scarce, and it may be more of a recruiting pitch than a product announcement.
Senior Writer X Topics artificial intelligence Gregory Barber Paresh Dave Amanda Hoover Caitlin Harrington Will Knight Paresh Dave Steven Levy Will Knight Facebook X Pinterest YouTube Instagram Tiktok More From WIRED Subscribe Newsletters Mattresses Reviews FAQ Wired Staff Coupons Black Friday Editorial Standards Archive Contact Advertise Contact Us Customer Care Jobs Press Center RSS Accessibility Help Condé Nast Store Do Not Sell My Personal Info © 2023 Condé Nast. All rights reserved. Use of this site constitutes acceptance of our User Agreement and Privacy Policy and Cookie Statement and Your California Privacy Rights.
WIRED may earn a portion of sales from products that are purchased through our site as part of our Affiliate Partnerships with retailers. The material on this site may not be reproduced, distributed, transmitted, cached or otherwise used, except with the prior written permission of Condé Nast.
Ad Choices Select international site United States LargeChevron UK Italia Japón Czech Republic & Slovakia
" |
805 | 2,022 | "The Fight Over Which Uses of Artificial Intelligence Europe Should Outlaw | WIRED" | "https://www.wired.com/story/europe-law-outlaw-ai" | "Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Khari Johnson Business The Fight Over Which Uses of AI Europe Should Outlaw Photograph: SAKIS MITROLIDIS/Getty Images Save this story Save Save this story Save Application Prediction Surveillance Regulation End User Government Sector Public safety Source Data Biometric In 2019, guards on the borders of Greece, Hungary, and Latvia began testing an artificial-intelligence-powered lie detector. The system, called iBorderCtrl, analyzed facial movements to attempt to spot signs a person was lying to a border agent. The trial was propelled by nearly $5 million in European Union research funding, and almost 20 years of research at Manchester Metropolitan University, in the UK.
The trial sparked controversy. Polygraphs and other technologies built to detect lies from physical attributes have been widely declared unreliable by psychologists. Soon, errors were reported from iBorderCtrl, too. Media reports indicated that its lie-prediction algorithm didn’t work , and the project’s own website acknowledged that the technology “may imply risks for fundamental human rights.” This month, Silent Talker, a company spun out of Manchester Met that made the technology underlying iBorderCtrl, dissolved. But that’s not the end of the story. Lawyers, activists, and lawmakers are pushing for a European Union law to regulate AI, which would ban systems that claim to detect human deception in migration—citing iBorderCtrl as an example of what can go wrong. Former Silent Talker executives could not be reached for comment.
A ban on AI lie detectors at borders is one of thousands of amendments to the AI Act being considered by officials from EU nations and members of the European Parliament. The legislation is intended to protect EU citizens’ fundamental rights , like the right to live free from discrimination or to declare asylum. It labels some use cases of AI “high-risk,” some “low-risk,” and slaps an outright ban on others. Those lobbying to change the AI Act include human rights groups, trade unions, and companies like Google and Microsoft , which want the AI Act to draw a distinction between those who make general-purpose AI systems, and those who deploy them for specific uses.
Last month, advocacy groups including European Digital Rights and the Platform for International Cooperation on Undocumented Migrants called for the act to ban the use of AI polygraphs that measure things like eye movement, tone of voice, or facial expression at borders. Statewatch, a civil liberties nonprofit, released an analysis warning that the AI Act as written would allow use of systems like iBorderCtrl, adding to Europe’s existing “publicly funded border AI ecosystem.” The analysis calculated that over the past two decades, roughly half of the €341 million ($356 million) in funding for use of AI at the border, such as profiling migrants, went to private companies.
The use of AI lie detectors on borders effectively creates new immigration policy through technology, says Petra Molnar, associate director of the nonprofit Refugee Law Lab, labeling everyone as suspicious. “You have to prove that you are a refugee, and you're assumed to be a liar unless proven otherwise,” she says. “That logic underpins everything. It underpins AI lie detectors, and it underpins more surveillance and pushback at borders.” Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg Molnar, an immigration lawyer, says people often avoid eye contact with border or migration officials for innocuous reasons—such as culture, religion, or trauma—but doing so is sometimes misread as a signal a person is hiding something. Humans often struggle with cross-cultural communication or speaking to people who experienced trauma, she says, so why would people believe a machine can do better? The first draft of the AI Act released in April 2021 listed social credit scores and real-time use of facial recognition in public places as technologies that would be banned outright. It labeled emotion recognition and AI lie detectors for border or law enforcement as high-risk, meaning deployments would have to be listed on a public registry. Molnar says that wouldn’t go far enough, and the technology should be added to the banned list.
Dragoș Tudorache, one of two rapporteurs appointed by members of the European Parliament to lead the amendment process, said lawmakers filed amendments this month, and he expects a vote on them by late 2022. The parliament’s rapporteurs in April recommended adding predictive policing to the list of banned technologies, saying it “violates the presumption of innocence as well as human dignity,” but did not suggest adding AI border polygraphs. They also recommended categorizing systems for patient triage in health care or deciding whether people get health or life insurance as high-risk.
While the European Parliament proceeds with the amendment process, the Council of the European Union will also consider amendments to the AI Act. There, officials from countries including the Netherlands and France have argued for a national security exemption to the AI Act, according to documents obtained with a freedom of information request by the European Center for Not-for-Profit Law.
Vanja Skoric, a program director with the organization, says a national security exemption would create a loophole through which AI systems that endanger human rights—such as AI polygraphs—could slip into the hands of police or border agencies.
Final votes to pass or reject the law could take place by late next year. Before members of the European Parliament filed their amendments on June 1, Tudorache told WIRED, “If we get amendments in the thousands as some people anticipate, the work to actually produce some compromise out of thousands of amendments will be gigantic.” He now says about 3,300 amendment proposals to the AI Act were received but thinks the AI Act legislative process could wrap up by mid-2023.
Concerns that data-driven predictions can be discriminatory are not just theoretical. An algorithm deployed by the Dutch tax authority to detect potential child benefit fraud between 2013 and 2020 was found to have harmed tens of thousands of people, and led to more than 1,000 children being placed in foster care.
The flawed system used data such as whether a person had a second nationality as a signal for investigation, and it had a disproportionate impact on immigrants.
The Dutch social benefits scandal might have been prevented or lessened had Dutch authorities produced an impact assessment for the system, as proposed by the AI Act, that could have raised red flags, says Skoric. She argues that the law must have a clear explanation for why a model earns certain labels, for example when rapporteurs moved predictive policing from the high-risk category to a recommended ban.
Alexandru Circiumaru, European public policy lead at the independent research and human rights group the Ada Lovelace Institute, in the UK, agrees, saying the AI Act needs to better explain the methodology that leads to a type of AI system being recategorized from banned to high-risk or the other way around. “Why are these systems included in those categories now, and why weren’t they included before? What’s the test?” he asks.
More clarity on those questions is also necessary to prevent the AI Act from quashing potentially empowering algorithms, says Sennay Ghebreab, founder and director of the Civic AI Lab at the University of Amsterdam. Profiling can be punitive, as in the Dutch benefits scandal, and he supports a ban on predictive policing. But other algorithms can be helpful—for example, in helping resettle refugees by profiling people based on their background and skills. A 2018 study published in Science calculated that a machine-learning algorithm could expand employment opportunities for refugees by more than 40 percent in the United States and more than 70 percent in Switzerland, at little cost.
“I don't believe we can build systems that are perfect,” he says. “But I do believe that we can continuously improve AI systems by looking at what went wrong and getting feedback from people and communities.”

Many of the thousands of suggested changes to the AI Act won’t be integrated into the final version of the law. But Petra Molnar of the Refugee Law Lab, who has suggested nearly two dozen changes, including banning systems like iBorderCtrl, says that it’s an important time to be clear about which forms of AI should be banned or deserve special care.
“This is a really important opportunity to think through what we want our world to look like, what we want our societies to be like, what it actually means to practice human rights in reality, not just on paper,” she says. “It's about what we owe to each other, what kind of world we're building, and who was excluded from these conversations.”
" |
806 | 2,020 | "An Algorithm That 'Predicts' Criminality Based on a Face Sparks a Furor | WIRED" | "https://www.wired.com/story/algorithm-predicts-criminality-based-face-sparks-furor" | "Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Sidney Fussell

An Algorithm That ‘Predicts’ Criminality Based on a Face Sparks a Furor

A since-deleted press release claimed an algorithm could predict whether someone would become a criminal by analyzing their face. Photograph: OCTAVIO ICONS/Alamy

In early May, a press release from Harrisburg University claimed that two professors and a graduate student had developed a facial-recognition program that could predict whether someone would be a criminal. The release said the paper would be published in a collection by Springer Nature, a big academic publisher.
With “80 percent accuracy and with no racial bias,” the paper, A Deep Neural Network Model to Predict Criminality Using Image Processing, claimed its algorithm could predict “if someone is a criminal based solely on a picture of their face.” The press release has since been deleted from the university website.
Tuesday, more than 1,000 machine-learning researchers, sociologists, historians, and ethicists released a public letter condemning the paper, and Springer Nature confirmed on Twitter it will not publish the research.
But the researchers say the problem doesn't stop there. Signers of the letter, collectively calling themselves the Coalition for Critical Technology (CCT), said the paper’s claims “are based on unsound scientific premises, research, and methods which … have [been] debunked over the years.” The letter argues it is impossible to predict criminality without racial bias, “because the category of ‘criminality’ itself is racially biased.”

Advances in data science and machine learning have led to numerous algorithms in recent years that purport to predict crimes or criminality. But if the data used to build those algorithms is biased, the algorithms’ predictions will also be biased. Because of the racially skewed nature of policing in the US, the letter argues, any predictive algorithm modeling criminality will only reproduce the biases already reflected in the criminal justice system.
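The statistical core of that argument can be shown with a toy example. In the sketch below—hypothetical numbers, not data from any real police department—two groups offend at an identical rate, but one is policed twice as heavily, so a model trained on the resulting arrest records learns the enforcement pattern rather than the behavior.

```python
# A toy illustration (hypothetical numbers) of how biased labels propagate:
# identical underlying behavior, unequal policing, biased "predictions".
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
group = rng.integers(0, 2, n)        # two equally sized groups
offense = rng.random(n) < 0.05       # identical true offense rate: 5%

# Group 1 is policed twice as heavily, so its offenses are twice as
# likely to be recorded and end up in the training labels.
record_prob = np.where(group == 1, 0.8, 0.4)
recorded = offense & (rng.random(n) < record_prob)

# The base rate a classifier trained on these labels would learn:
for g in (0, 1):
    rate = recorded[group == g].mean()
    print(f"group {g}: learned 'criminality' rate = {rate:.2%}")
# group 0 -> ~2.0%, group 1 -> ~4.0%: twice the predicted "criminality"
# for group 1, even though the underlying behavior is identical.
```

No facial features appear anywhere in this example; a face-analysis model trained to predict the recorded labels would simply learn whatever visual proxies correlate with group membership.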
Mapping these biases onto facial analysis recalls the abhorrent “race science” of prior centuries, which purported to use technology to identify differences between the races—in measurements such as head size or nose width—as proof of their innate intellect, virtue, or criminality.
Race science was debunked long ago, but papers that use machine learning to “predict” innate attributes or offer diagnoses are making a subtle, but alarming return.
In 2016 researchers from Shanghai Jiao Tong University claimed their algorithm could predict criminality using facial analysis. Engineers from Stanford and Google refuted the paper’s claims, calling the approach a new “physiognomy,” a debunked race science popular among eugenists, which infers personality attributes from the shape of someone’s head.
In 2017 a pair of Stanford researchers claimed their artificial intelligence could tell if someone is gay or straight based on their face. LGBTQ organizations lambasted the study, noting how harmful the notion of automated sexuality identification could be in countries that criminalize homosexuality. Last year, researchers at Keele University in England claimed their algorithm trained on YouTube videos of children could predict autism. Earlier this year, a paper in the Journal of Big Data not only attempted to “infer personality traits from facial images,” but cited Cesare Lombroso, the 19th-century scientist who championed the notion that criminality was inherited.
Each of those papers sparked a backlash, though none led to new products or medical tools. The authors of the Harrisburg paper, however, claimed their algorithm was specifically designed for use by law enforcement.
“Crime is one of the most prominent issues in modern society,” said Jonathan W. Korn, a PhD student at Harrisburg and former New York police officer, in a quote from the deleted press release. “The development of machines that are capable of performing cognitive tasks, such as identifying the criminality of [a] person from their facial image, will enable a significant advantage for law enforcement agencies and other intelligence agencies to prevent crime from occurring in their designated areas.” Korn didn’t respond to a request for comment. Nathaniel Ashby, one of the paper’s coauthors, declined to comment.
Springer Nature did not respond to a request for comment before this article was initially published. In a statement after the article was initially published, Springer said, “We acknowledge the concern regarding this paper and would like to clarify at no time was this accepted for publication. It was submitted to a forthcoming conference for which Springer will publish the proceedings of in the book series Transactions on Computational Science and Computational Intelligence and went through a thorough peer review process. The series editor’s decision to reject the final paper was made on Tuesday 16th June and was officially communicated to the authors on Monday 22nd June. The details of the review process and conclusions drawn remain confidential between the editor, peer reviewers and authors.” Civil liberties groups have long warned against law enforcement use of facial recognition. The software is less accurate on darker-skinned people than lighter-skinned people, according to a report from AI researchers Timnit Gebru and Joy Buolamwini, both of whom signed the CCT letter.
In 2018, the ACLU found that Amazon’s facial-recognition product, Rekognition, misidentified members of Congress as criminals, erring more frequently on black officials than white ones. Amazon recently announced a one-year moratorium on selling the product to police.
The Harrisburg paper has seemingly never been publicly posted, but publishing problematic research alone can be dangerous. Last year Berlin-based security researcher Adam Harvey found that facial-recognition data sets from American universities were used by surveillance firms linked to the Chinese government. Because AI research created for one purpose can be used for another, papers require intense ethical scrutiny even if they don’t directly lead to new products or methods.
“Like computers or the internal combustion engine, AI is a general-purpose technology that can be used to automate a great many tasks, including ones that should not be undertaken in the first place,” the letter reads.
Updated, 6-24-20, 1:30pm ET: This article has been updated to include a statement from Springer Nature.
" |
807 | 2,023 | "AI experts draft algorithmic bill of rights to protect us from Big Tech - Vox" | "https://www.vox.com/the-highlight/2019/5/22/18273284/ai-algorithmic-bill-of-rights-accountability-transparency-consent-bias" | "We have a request Vox's journalism is free, because we believe that everyone deserves to understand the world they live in. Reader support helps us do that. Can you chip in to help keep Vox free for all? × Amanda Northrop/Vox Filed under: Technology 10 things we should all demand from Big Tech right now We need an algorithmic bill of rights. AI experts helped us write one.
By Sigal Samuel. Updated May 29, 2019, 9:30am EDT

A woman’s job application is rejected because of a recruiting algorithm that favors men’s résumés. A girl dies by suicide after graphic images of self-harm are pushed up on her feed by social media algorithms.
A black teen steals something and gets rated high-risk for committing future crime by an algorithm used in courtroom sentencing , while a white man steals something of similar value and gets rated low-risk.
In recent years, advances in computer science have yielded algorithms so powerful that their creators have presented them as tools that can help us make decisions more efficiently and impartially. But the idea that algorithms are unbiased is a fantasy; in fact, they still end up reflecting human biases. And as they become ever more ubiquitous, we need to get clear on what they should — and should not — be allowed to do.
In a new book, A Human’s Guide to Machine Intelligence , Kartik Hosanagar, a University of Pennsylvania technology professor, argues we need an algorithmic bill of rights to protect us from the many risks AI is introducing into our lives, alongside the various benefits.
People have called for such protections in the past, and in April, Sens. Cory Booker (D-NJ) and Ron Wyden (D-OR) introduced the Algorithmic Accountability Act.
If passed, it would require companies to audit their algorithms for bias and discrimination.
Some AI experts praised it as a “great first step” but noted that it leaves a number of concerns unaddressed.
All this got me wondering: Which demands, exactly, belong on an algorithmic bill of rights? So I reached out to 10 experts (including Hosanagar) who are at the forefront of investigating how AI risk is creeping into the mundane aspects of life as well as high-stakes fields like immigration, medicine, and criminal justice. I asked them each to name a protection the public needs enshrined in law.
Allow me to present the result: a crowdsourced algorithmic bill of rights.
Transparency: We have the right to know when an algorithm is making a decision about us, which factors are being considered by the algorithm, and how those factors are being weighted.
Transparency is the No. 1 concern on people’s minds, judging by the responses I received. “We’re not even fully aware of when an algorithm is being used to make decisions for us or about us,” Hosanagar told me.
Say you’re applying for a mortgage. You deserve to know: Is an algorithm being used to make a decision about you? Is that decision based solely on the information you put down on your application form, or are your social media posts and other data obtained from third-party sources also being used? How does the mortgage approval algorithm rate different factors — does it place the greatest weight on income, medium weight on education, and low weight on current address, for example?

Cathy O’Neil, the author of Weapons of Math Destruction, put it this way in an email: “In situations where our financial lives, our livelihood, or our liberty is at risk — so not in the case of every algorithm under the sun — we should know what attributes about us are being used [and] we should know how our ‘scores’ depended on the values of those attributes.”

Explanation: We have the right to be given explanations about how algorithms affect us in a specific situation, and these explanations should be clear enough that the average person will be able to understand them.
Related to transparency is the demand for explainability. All algorithmic systems should carry something akin to a nutritional label laying out what went into them, according to Amy Webb, author of The Big Nine and founder of the Future Today Institute, which researches emerging technologies.
“The terms of service for an AI application — or any service that uses algorithmic decision-making processes — should be written in language plain enough that a third grader can comprehend it,” she said. “It should be available in every language as soon as the application goes live.”
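Here is a toy sketch of the kind of nutritional label and plain-language readout the transparency and explanation rights describe. The factors, weights, and threshold are invented, not any real lender's; the point is the shape of the disclosure, not the model.

```python
WEIGHTS = {"income": 0.6, "education": 0.3, "address_stability": 0.1}  # invented weights
THRESHOLD = 0.5

def score_applicant(applicant: dict) -> None:
    """Print a plain-language breakdown of a toy approval decision."""
    total = 0.0
    print("factor              weight  value  contribution")
    for factor, weight in WEIGHTS.items():
        contribution = weight * applicant[factor]
        total += contribution
        print(f"{factor:<18}  {weight:>5.2f}  {applicant[factor]:>5.2f}  {contribution:>12.3f}")
    verdict = "approve" if total >= THRESHOLD else "deny"
    print(f"decision: {verdict} (score {total:.3f} vs. threshold {THRESHOLD})")

# Each value is the applicant's normalized standing on that factor, from 0 to 1.
score_applicant({"income": 0.8, "education": 0.4, "address_stability": 0.2})
```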
Consent: We have the right to give or refuse consent for any AI application that has a material impact on our lives or uses sensitive data, such as biometric data.

A demand for the right to consent has been gathering steam as more people realize that images of their faces are being used to power facial recognition technology.
NBC reported that IBM had scraped a million photos of faces from the website Flickr — without the subjects’ or photographers’ permission. The news sparked a backlash. People may have consented to having their photos up on Flickr, but they hadn’t imagined their images would be used to train a technology that could one day be used to surveil them. Some states, like Oregon and Washington , are currently considering bills to regulate facial recognition.
The issue of consent extends well beyond that particular technology. Imagine you’re applying for a new job. Your prospective bosses inform you that your interview will be conducted by a robot — a practice that’s already in use today.
Regardless of what they tout as the benefits of this AI system, you should have the right to give or withhold consent, according to MIT computer scientist Joy Buolamwini. “Permission must be granted,” she said, “not taken for granted.”

Freedom from bias: We have the right to evidence showing that algorithms have been tested for bias related to race, gender, and other protected characteristics — before they’re rolled out. The algorithms must meet standards of fairness and nondiscrimination and ensure just outcomes.
Like Buolamwini, who founded the Algorithmic Justice League to fight bias in automated decision-making systems, Yeshimabeit Milner is deeply concerned about AI that discriminates against people of color. As the founder and executive director of Data for Black Lives , Milner has drawn attention to problems with predictive policing (algorithmic systems for predicting where crime is likely to occur) and criminal risk assessments (algorithmic systems for predicting recidivism). Police officers and judges use both these systems to guide their decisions, despite evidence that they’re biased against black people.
Algorithmic bias can result when the initial data used to train an AI system isn’t diverse enough (say, if it includes mostly white men) or if it reflects biased decisions authorities made in the past. For example, if officers overpoliced a certain neighborhood, yielding a high rate of arrests there, and that arrest data is used to train an AI, the system could end up reinforcing the old bias.
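That feedback loop is simple enough to simulate. In the sketch below, with invented numbers, two neighborhoods have identical true crime rates, one starts out overpoliced, and each year's patrols follow past arrest counts.

```python
import numpy as np

true_crime = np.array([0.10, 0.10])   # identical underlying crime rates
patrol_share = np.array([0.7, 0.3])   # neighborhood 0 starts out overpoliced
arrests = np.zeros(2)

for year in range(5):
    # Arrests scale with both crime and how heavily an area is patrolled.
    arrests += true_crime * patrol_share * 1_000
    # "Predictive" allocation: next year's patrols follow past arrest counts.
    patrol_share = arrests / arrests.sum()
    print(f"year {year}: patrol share = {patrol_share.round(2)}")
# The share stays pinned at [0.7, 0.3]: the historical skew never corrects,
# because the system only observes the arrests its own patrols generate.
```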
Explaining why it’s so crucial for the law to protect against algorithmic bias, Milner said, “If a defendant is labeled ‘high risk’ by a recidivism algorithm, that can mean the difference between a fine and a prison sentence.” She added that where discriminatory algorithms go unchecked, they extend the shelf life of racist public policy: “The harms caused by decades of redlining have been amplified by new forms of ‘digital redlining’ like credit scores and predictive policing. It is not a coincidence that the communities that were labeled hazardous on redlining maps in 1933 are the predictive policing hotspots of today.”

Feedback mechanism: We have the right to exert some degree of control over the way algorithms work.
The appropriate type and level of control we have over a given algorithm will depend on its specific use. But we should always be able to communicate with an algorithmic system that’s making decisions for us. As Hosanagar explains in his book, “It can be as limited and straightforward as giving a Facebook user the power to flag a news post as potentially false; it can be as dramatic and significant as letting a passenger intervene when he is not satisfied with the choices a driverless car appears to be making.”

Portability: We have the right to easily transfer all our data from one provider to another.
The big companies running our data through their algorithms — such as Facebook, Twitter, Google, and Microsoft — haven’t generally made it easy for us to take back our data and opt out. Pedro Domingos, a University of Washington computer science professor, wants us to be able to transfer our data from one provider to another with one click. Without that portability, he said, “we risk being locked into one of the big providers, with increasingly negative consequences as the data becomes more important.”

Redress: We have the right to seek redress if we believe an algorithmic system has unfairly penalized or harmed us.
Too often, companies view their data as proprietary and don’t want to release it to researchers for external audit. “Corporate secrecy laws are a barrier to due process,” said Jason Schultz, the AI Now Institute’s research lead for law and policy. “They contribute to the ‘black box effect,’ rendering systems opaque and unaccountable, making it hard to assess bias.” The right to redress — and all of the above rights — should supersede corporate secrecy laws that stand in the way of due process.
Algorithmic literacy: We have the right to free educational resources about algorithmic systems.
Because every citizen will be affected by algorithms, every citizen should have the opportunity to learn what algorithms are, how they work, and which risks they pose. Yet not everyone has the disposable income and time required to learn about them. Governments should offer this education for free.
Finland offers a promising example. Last year, the Nordic country announced plans to teach the basics of AI to 1 percent of its population — about 55,000 people — using a free online course called Elements of AI. The idea was to start with that relatively modest number and slowly build up. In short order, 140,000 people around the world registered for it (encouragingly, 40 percent were women). The scheme took off in part because it was pitched as a national challenge, and in part because it assured people with zero tech background that they could come to understand AI “with no complicated math or programming required,” as the course website says. The program has since spread to Sweden.
Independent oversight: We have the right to expect that an independent oversight body will be appointed to conduct retrospective reviews of algorithmic systems gone wrong. The results of these investigations should be made public.
Just as important as making sure algorithms are tested for bias before they’re rolled out is making sure they’re examined for unintended effects after they’re used. Eric Topol, a physician and the author of Deep Medicine , told me too many algorithms are validated only on computers, not in real-world clinical environments. “We have already learned that there is a chasm between the accuracy of an algorithm, especially determined this way, and a favorable impact on clinical outcomes,” he said, explaining that just because an algorithm appears to work great in a computer simulation doesn’t mean it’ll work as intended in all doctors’ offices.
Topol believes that after implementation of the algorithm in a clinical practice, the results should be assessed and made available to doctors and patients.
The broader concept here is that we need to make sure problematic incidents are investigated after they occur. Ben Shneiderman, a computer science professor at the University of Maryland, argues that we need to create a National Algorithms Safety Board for this purpose.
“Other fields, like aviation safety, have come to understand that independent oversight helps to prevent deadly outcomes,” Shneiderman told me. “Therefore, I have proposed a National Algorithms Safety Board, which would investigate deadly accidents, like the Boeing 737 Max and Tesla crashes.”

Federal and global governance: We have the right to robust federal and global governance structures with human rights at their center. Algorithmic systems don’t stop at national borders, and they are increasingly used to decide who gets to cross borders, making international governance crucial.
Immigration is a high-stakes domain increasingly being guided by automated decision-making. In Europe, three countries are planning to test the use of AI lie detectors on asylum seekers at border patrol checkpoints. Even Canada, a country usually seen as refugee-friendly, is using AI to vet immigrants and refugees, sparking an outcry from human rights lawyers.
“The use of algorithms in migration can create a high-risk laboratory of technological experiments,” Petra Molnar, a Toronto-based lawyer, told me. “The nuanced and complex nature of refugee and immigration decisions may be lost on these technologies.” To keep algorithms from leading to serious breaches of human rights at the border, we need to promote international norms around them.
This raises the question: Who should be tasked with enforcing these norms? Government regulators? The tech companies themselves? Inspired by Shneiderman’s vision, Hosanagar advocates for the creation of an independent Algorithmic Safety Board, modeled on the Federal Reserve Board. Each country would have its own board at the federal level, and these boards would talk to one another and ensure there is some consistency across countries. Right now, some places have aggressive regulatory legislation (like the EU’s General Data Protection Regulation), while in other countries there’s almost no regulation at all. Hosanagar wants to see international coordination handled by a body modeled on the International Telecom Union, which governs cellphone communication.
He argues for a mix of government regulation and self-regulation from tech companies. We need both, in his opinion, because tech companies can’t always be trusted to police themselves, and government regulators can’t always understand the complexities of fast-developing AI technology by themselves.
How the algorithmic bill of rights was made

When I reached out to the 10 experts, some of them got back to me with more than one recommendation. But there was a lot of overlap between their ideas, so I streamlined them into the 10 demands listed above. Then I sent the completed bill to the experts so they could see what their peers had come up with and offer ideas for improvement.
Not every expert agreed with each and every item on the list. One or two took issue with specific recommendations, while others agreed with the broad strokes of all but had different ideas about how they should be implemented. Hosanagar, for example, said auditing doesn’t need to be performed by an oversight body, but could instead be done “by any independent team, including another company or another team within the organization.” Finally, I want to emphasize that by its very nature, a bill of rights like this is a work in progress. AI is developing so fast that as time ticks on, we’ll almost certainly become aware of the need for other protections from risks we haven’t yet imagined. For now, having a concrete, if provisional, list of demands may help catalyze public conversation and action. If you believe something important has been left off the list, you’re welcome to drop me a note.
" |
808 | 2,023 | "Hollywood Actors Strike Ends With a Deal That Will Impact AI and Streaming for Decades | WIRED" | "https://www.wired.com/story/hollywood-actors-strike-ends-ai-streaming" | "Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Angela Watercutter Will Bedingfield Culture Hollywood Actors Strike Ends With a Deal That Will Impact AI and Streaming for Decades Photograph: David Livingston/Getty Images Save this story Save Save this story Save After 118 days on the picket lines, the longest such strike in Hollywood’s history, the Screen Actors Guild-American Federation of Television and Radio Artists has reached a deal with the Alliance of Motion Picture and Television Producers. Both sides were mum about the terms of the deal Wednesday night, but it comes following a long struggle over the use of artificial intelligence on actors’ performances and actors’ demands for residual payments for shows and films that play on streaming services.
A committee from SAG, which represents thousands of film and television actors, approved the agreement Wednesday. The strike itself, which has featured pickets outside the offices of Netflix, Disney, Warner Bros. Discovery, and others, will end Thursday morning. It’s expected that the tentative deal will head to the union’s national board to be approved on Friday.
Undeniably, this is a huge milestone for Hollywood, a $130 billion-plus industry that has all but ground to halt this year, as both the Writers Guild of America and SAG dug in their heels over fair wages and the use of AI in their work. WGA members went on strike in May; SAG walked off the job in July, the first time the industry had faced a dual work stoppage since 1960. The WGA strike ended in September with a historic deal that put up guardrails to protect writers from AI encroaching on their work.
As this year’s negotiations between SAG and AMPTP dragged on, generative AI became the major sticking point. Back in July, studios claimed they offered a “groundbreaking AI proposal that protects actors’ digital likenesses.” SAG countered that the proposal stipulated background performers could be scanned, paid for the day, and then turned into digital characters that studios could use “for the rest of eternity.” (AMPTP disputed this.
) The issue was volleyed back and forth until last weekend, when SAG reviewed the studios’ “last, best, and final” offer and rejected it, claiming “there are several essential items on which we still do not have an agreement, including AI.” A follow-up story in The Hollywood Reporter revealed that the AMPTP proposal sought to allow studios to pay for AI scans of what are known as Schedule F performers and, following the actors’ death, allow studios to use the scans without the consent of the estate or SAG. Schedule F performers include anyone who makes more than the minimum rate for TV series regulars or feature films. The guild wanted compensation for reuse of the scans, along with consent.
On Tuesday, the studios reportedly agreed to adjust the AI language in their proposal, a move that seems to have been the tipping point. Even though the terms of the tentative deal reached Thursday are unclear, it’s hard to imagine the actors didn’t get at least some of the AI protections they were seeking.
Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg Since the actors initially went on strike in July, the conversation about artificial intelligence has morphed from dinner-party “What if?” chatter to a full-blown international issue. Last week, US president Joe Biden signed a broad executive order aimed at curtailing the power of commercial AI. The order didn’t address the effects of machine learning on Hollywood, per se, but SAG piggybacked on the announcement, posting on X that “for strong, safe, and responsible AI development and use, it is imperative that workers and unions remain at the forefront of policy development.” The overriding issue for both writers and actors going into Hot Strike Summer (which became Hot Strike Fall) was that it was becoming impossible for guild members to “ maintain a middle-class lifestyle ,” as the SAG website put it. Part of that calculus, of course, is ensuring that jobs don’t get usurped by AI but also that members get residual payments for streaming content that rivals what they would make if a TV show airs on a network. “They’ve got a 2023 business model for streaming with a 1970 business model for paying performers and writers and other creatives in the industry,” Duncan Crabtree-Ireland, executive director and chief negotiator for SAG-AFTRA, said in June. “That is not OK.” With both strikes now coming to an end, there is cause for optimism on these issues—albeit cautious optimism. It also means productions, from Gladiator 2 to Andor , can resume filming. “Within weeks of the strikes ending we will be back in the United Kingdom shooting the second half of Deadpool ,” Shawn Levy, who is directing the third Deadpool movie, told WIRED in August. Time to get on a plane, dude.
You Might Also Like … 📩 Get the long view on tech with Steven Levy's Plaintext newsletter Watch this guy work, and you’ll finally understand the TikTok era How Telegram became a terrifying weapon in the Israel-Hamas War Inside Elon Musk’s first election crisis —a day after he “freed” the bird The ultra-efficient farm of the future is in the sky The best pickleball paddles for beginners and pros 🌲 Our Gear team has branched out with a new guide to the best sleeping pads and fresh picks for the best coolers and binoculars Senior Editor X Tumblr Staff writer X Topics Film Movies TV hollywood artificial intelligence streaming Kate Knibbs Vauhini Vara Jason Parham Virginia Heffernan Gideon Lichfield Kate Knibbs Lindsay Jones Jason Parham Facebook X Pinterest YouTube Instagram Tiktok More From WIRED Subscribe Newsletters Mattresses Reviews FAQ Wired Staff Coupons Black Friday Editorial Standards Archive Contact Advertise Contact Us Customer Care Jobs Press Center RSS Accessibility Help Condé Nast Store Do Not Sell My Personal Info © 2023 Condé Nast. All rights reserved. Use of this site constitutes acceptance of our User Agreement and Privacy Policy and Cookie Statement and Your California Privacy Rights.
" |
809 | 2,019 | "Most Deepfakes Are Porn, and They're Multiplying Fast | WIRED" | "https://www.wired.com/story/most-deepfakes-porn-multiplying-fast" | "Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Tom Simonite Business Most Deepfakes Are Porn, and They're Multiplying Fast Play/Pause Button Pause Illustration: Elena Lacey; Getty Images Save this story Save Save this story Save Application Deepfakes Ethics Identifying Fabrications Regulation Sector Social media Video Public safety Source Data Video Technology Machine learning Machine vision In November 2017, a Reddit account called deepfakes posted pornographic clips made with software that pasted the faces of Hollywood actresses over those of the real performers. Nearly two years later, deepfake is a generic noun for video manipulated or fabricated with artificial intelligence software. The technique has drawn laughs on YouTube, along with concern from lawmakers fearful of political disinformation. Yet a new report that tracked the deepfakes circulating online finds they mostly remain true to their salacious roots.
Startup Deeptrace took a kind of deepfake census during June and July to inform its work on detection tools it hopes to sell to news organizations and online platforms. It found almost 15,000 videos openly presented as deepfakes—nearly twice as many as seven months earlier. Some 96 percent of the deepfakes circulating in the wild were pornographic, Deeptrace says.
The count is unlikely to be exhaustive, but the findings are a reminder that despite speculation about deepfakes destabilizing elections , the technology is mostly being used very differently, including as a tool for harassment.
One worrying trend: Deeptrace says the tools needed to create deepfakes are becoming more sophisticated and more widely available.
The startup's report describes a niche but thriving ecosystem of websites and forums where people share, discuss, and collaborate on pornographic deepfakes. Some are commercial ventures that run advertising around deepfake videos made by taking a pornographic clip and editing in a person's face without that individual's consent.
All the people edited into the pornographic clips Deeptrace found were women. Clips of the most popular figures—Western actresses and South Korean pop celebrities—had millions of views. Nonprofits have already reported that women journalists and political activists are being attacked or smeared with deepfakes. Henry Ajder, a researcher at Deeptrace who worked on the firm's report, says there are deepfake forums where users discuss or request pornographic deepfakes of women they know, such as ex-girlfriends, wanting to see them edited into a pornographic clip.
Danielle Citron, a law professor at Boston University, describes pornographic deepfakes made without a person’s consent as an “invasion of sexual privacy.” She spoke at a June hearing by the US House Intelligence Committee about artificial intelligence media manipulation tools.
The porn industry has helped pioneer new media technologies , from VHS and pop-up ads to streaming video. Citron says that the preponderance of pornographic deepfakes is a reminder of another consistent lesson from the history of technology: “At each stage we’ve seen that people use what’s ready and at hand to torment women. Deepfakes are an illustration of that.” Citron helped spur the recent spread of state legislation on revenge porn, which is now subject to laws in at least 46 states and the District of Columbia. California is among them; last week week its governor, Gavin Newsom, signed into law a bill that allows a person edited into sexually explicit material without consent to seek civil damages against the person who created or disclosed it.
Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg The law professor also says she is currently talking with House and Senate lawmakers from both parties about new federal laws to penalize distribution of malicious forgeries and impersonations, including deepfakes. “We’ve been encouraged that the uptake has been swift,” she adds.
Last week, senators Marco Rubio, the Republican of Florida, and Mark Warner, the Democrat from Virginia, both of whom are members of the Senate Intelligence Committee, wrote to Facebook and 10 other social media sites seeking more details of how they plan to detect and respond to malicious deepfakes. The legislators cautioned that fake clips could have a “corrosive impact on our democracy.” Ajder of Deeptrace plays down fears that a fake clip could significantly affect the 2020 election. But the startup’s report notes that growing awareness of the technology can fuel political deception.
In June, a Malaysian political aide was arrested after a video surfaced purportedly showing him having sex with the country’s minister of economic affairs. (Gay sex is illegal in Malaysia.) The country’s prime minister said the video was a deepfake, but independent experts have been unable to determine if the video was manipulated. “Deepfakes can provide plausible deniability,” Ajder says.
To conduct its analysis, Deeptrace used a mixture of manual searching and web scraping tools and data analysis to record known deepfakes from major porn sites, mainstream video services such as YouTube, and deepfake-specific sites and forums.
That methodology is imperfect. It couldn’t account for deepfakes that successfully passed off as real clips or probe every hidden online corner. Jack Clark, policy director at independent AI lab OpenAI, says the Deeptrace report is nonetheless a welcome attempt to gather empirical evidence on deepfakes, which has been lacking.
Clark predicts that fake videos won’t be the first example of unsavory consequences from the spread of artificial intelligence tools through commercialization and open source.
“Individuals will mess around with the technology and some of the ways they mess around will be harmful and offensive,” he notes.
Even a small nuclear war could trigger a global apocalypse Teaching pilots a new trick: landing quietly The former Soviet Union's surprisingly gorgeous subways Why are rich people so mean ? A brutal murder, a wearable witness, and an unlikely suspect 👁 If computers are so smart, how come they can’t read ? Plus, check out the latest news on artificial intelligence ✨ Optimize your home life with our Gear team’s best picks, from robot vacuums to affordable mattresses to smart speakers.
Senior Editor X Topics artificial intelligence machine learning pornography Video Deepfakes Amy Martyn David Gilbert Matt Laslo Steven Levy Niamh Rowe Will Bedingfield Morgan Meaker Peter Guest Facebook X Pinterest YouTube Instagram Tiktok More From WIRED Subscribe Newsletters Mattresses Reviews FAQ Wired Staff Coupons Black Friday Editorial Standards Archive Contact Advertise Contact Us Customer Care Jobs Press Center RSS Accessibility Help Condé Nast Store Do Not Sell My Personal Info © 2023 Condé Nast. All rights reserved. Use of this site constitutes acceptance of our User Agreement and Privacy Policy and Cookie Statement and Your California Privacy Rights.
WIRED may earn a portion of sales from products that are purchased through our site as part of our Affiliate Partnerships with retailers. The material on this site may not be reproduced, distributed, transmitted, cached or otherwise used, except with the prior written permission of Condé Nast.
Ad Choices Select international site United States LargeChevron UK Italia Japón Czech Republic & Slovakia
" |
810 | 2,023 | "Facebook develops new method to reverse-engineer deepfakes and track their source - The Verge" | "https://www.theverge.com/2021/6/16/22534690/facebook-deepfake-detection-reverse-engineer-ai-model-hyperparameters" | "The Verge homepage The Verge homepage The Verge The Verge logo.
/ Tech / Reviews / Science / Entertainment / More Menu Expand Menu Tech / Facebook / Artificial Intelligence Facebook develops new method to reverse-engineer deepfakes and track their source Facebook develops new method to reverse-engineer deepfakes and track their source / The work could help future deepfake investigations By James Vincent , a senior reporter who has covered AI, robotics, and more for eight years at The Verge.
| Share this story Deepfakes aren’t a big problem on Facebook right now, but the company continues to fund research into the technology to guard against future threats. Its latest work is a collaboration with academics from Michigan State University (MSU), with the combined team creating a method to reverse-engineer deepfakes : analyzing AI-generated imagery to reveal identifying characteristics of the machine learning model that created it.
The work is useful as it could help Facebook track down bad actors spreading deepfakes on its various social networks. This content might include misinformation but also non-consensual pornography — a depressingly common application of deepfake technology. Right now, the work is still in the research stage and isn’t ready to be deployed.
The method could help track down those spreading deepfakes online

Previous studies in this area have been able to determine which known AI model generated a deepfake, but this work, led by MSU’s Vishal Asnani, goes a step further by identifying the architectural traits of unknown models. These traits, known as hyperparameters, have to be tuned in each machine learning model like parts in an engine. Collectively, they leave a unique fingerprint on the finished image that can then be used to identify its source.
Identifying the traits of unknown models is important, Facebook research lead Tal Hassner tells The Verge, because deepfake software is extremely easy to customize. This potentially allows bad actors to cover their tracks if investigators were trying to trace their activity.
“Let’s assume a bad actor is generating lots of different deepfakes and uploads them on different platforms to different users,” says Hassner. “If this is a new AI model nobody’s seen before, then there’s very little that we could have said about it in the past. Now, we’re able to say, ‘Look, the picture that was uploaded here , the picture that was uploaded there , all of them came from the same model.’ And if we were able to seize the laptop or computer [used to generate the content], we will be able to say, ‘This is the culprit.’” Hassner compares the work to forensic techniques used to identify which model of camera was used to take a picture by looking for patterns in the resulting image. “Not everybody can create their own camera, though,” he says. “Whereas anyone with a reasonable amount of experience and standard computer can cook their own model that generates deepfakes.” Not only can the resulting algorithm fingerprint the traits of a generative model, but it can also identify which known model created an image and whether an image is a deepfake in the first place. “On standard benchmarks, we get state-of-the-art results,” says Hassner.
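Facebook and MSU describe the system at a high level rather than releasing code, so the following is only an illustrative sketch of the attribution step: the hand-rolled residual statistics stand in for the learned fingerprint network, and the reference fingerprints are invented.

```python
import numpy as np

def extract_fingerprint(image: np.ndarray) -> np.ndarray:
    # Crude proxy for generation artifacts: statistics of the residual left
    # after subtracting a local average (the real system learns this embedding).
    blurred = (image[:-2, :-2] + image[2:, :-2] + image[:-2, 2:] + image[2:, 2:]) / 4
    residual = image[1:-1, 1:-1] - blurred
    return np.array([residual.mean(), residual.std(),
                     np.abs(residual).mean(), (residual ** 2).mean()])

# Hypothetical reference fingerprints for generators seen in past investigations.
KNOWN_MODELS = {
    "generator_A": np.array([0.00, 0.11, 0.08, 0.012]),
    "generator_B": np.array([0.01, 0.25, 0.19, 0.062]),
}

def attribute(image: np.ndarray) -> str:
    # Nearest known fingerprint; a large distance to every reference would
    # instead suggest a previously unseen model.
    fp = extract_fingerprint(image)
    return min(KNOWN_MODELS, key=lambda name: float(np.linalg.norm(KNOWN_MODELS[name] - fp)))

suspect = np.random.default_rng(0).random((64, 64))  # stands in for a grayscale frame
print(attribute(suspect))
```

The same fingerprint lets two uploads on different platforms be tied to one generator, which is the investigative payoff Hassner describes.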
Deepfake detection is still an “unsolved problem”

But it’s important to note that even these state-of-the-art results are far from reliable. When Facebook held a deepfake detection competition last year, the winning algorithm was only able to detect AI-manipulated videos 65.18 percent of the time. Researchers involved said that spotting deepfakes using algorithms is still very much an “unsolved problem.” Part of the reason for this is that the field of generative AI is extremely active. New techniques are published every day, and it’s nearly impossible for any filter to keep up.
Those involved in the field are keenly aware of this dynamic, and when asked if publishing this new fingerprinting algorithm will lead to research that can go undetected by these methods, Hassner agrees. “I would expect so,” he says. “This is a cat and mouse game, and it continues to be a cat and mouse game.”
" |
811 | 2,023 | "Microsoft and OpenAI Working on ChatGPT-Powered Bing in Challenge to Google — The Information" | "https://www.theinformation.com/articles/microsoft-and-openai-working-on-chatgpt-powered-bing-in-challenge-to-google" | "Exclusive: OpenAI Co-Founder Altman Plans New Venture Subscribe and Read now Microsoft and OpenAI Working on ChatGPT-Powered Bing in Challenge to Google Microsoft and OpenAI Working on ChatGPT-Powered Bing in Challenge to Google By Aaron Holmes [email protected] om Profile and archive → Follow Aaron on Twitter Microsoft could soon get a return on its $1 billion investment in OpenAI, creator of the ChatGPT chatbot, which gives humanlike text answers to questions.
Microsoft is preparing to launch a version of its Bing search engine that uses the artificial intelligence behind ChatGPT to answer some search queries rather than just showing a list of links, according to two people with direct knowledge of the plans. Microsoft hopes the new feature, which could launch before the end of March, will help it outflank Google, its much bigger search rival.
" |
812 | 2,011 | "The scary truth about AI copyright is nobody knows what will happen next - The Verge" | "https://www.theverge.com/23444685/generative-ai-copyright-infringement-legal-fair-use-training-data" | "The Verge homepage The Verge homepage The Verge The Verge logo.
/ Tech / Reviews / Science / Entertainment / More Menu Expand Menu Artificial Intelligence The scary truth about AI copyright is nobody knows what will happen next The last year has seen a boom in AI models that create art, music, and code by learning from others’ work. But as these tools become more prominent, unanswered legal questions could shape the future of the field.
By James Vincent , a senior reporter who has covered AI, robotics, and more for eight years at The Verge.
Nov 15, 2022, 3:00 PM UTC | Comments Share this story Generative AI has had a very good year. Corporations like Microsoft, Adobe, and GitHub are integrating the tech into their products; startups are raising hundreds of millions to compete with them; and the software even has cultural clout, with text-to-image AI models spawning countless memes. But listen in on any industry discussion about generative AI, and you’ll hear, in the background, a question whispered by advocates and critics alike in increasingly concerned tones: is any of this actually legal? The question arises because of the way generative AI systems are trained. Like most machine learning software, they work by identifying and replicating patterns in data. But because these programs are used to generate code, text, music, and art, that data is itself created by humans, scraped from the web and copyright protected in one way or another.
For AI researchers in the far-flung misty past (aka the 2010s), this wasn’t much of an issue. At the time, state-of-the-art models were only capable of generating blurry, fingernail-sized black-and-white images of faces.
This wasn’t an obvious threat to humans. But in the year 2022, when a lone amateur can use software like Stable Diffusion to copy an artist’s style in a matter of hours or when companies are selling AI-generated prints and social media filters that are explicit knock-offs of living designers, questions of legality and ethics have become much more pressing.
Generative AI models are trained on copyright-protected data — is that legal?

Take the case of Hollie Mengert, a Disney illustrator, who found that her art style had been cloned as an AI experiment by a mechanical engineering student in Canada. The student downloaded 32 of Mengert’s pieces and took a few hours to train a machine learning model that could reproduce her style. As Mengert told technologist Andy Baio, who reported the case: “For me, personally, it feels like someone’s taking work that I’ve done, you know, things that I’ve learned — I’ve been a working artist since I graduated art school in 2011 — and is using it to create art that that [sic] I didn’t consent to and didn’t give permission for.”

But is that fair? And can Mengert do anything about it? To answer these questions and understand the legal landscape surrounding generative AI, The Verge spoke to a range of experts, including lawyers, analysts, and employees at AI startups. Some said with confidence that these systems were certainly capable of infringing copyright and could face serious legal challenges in the near future. Others suggested, equally confident, that the opposite was true: that everything currently happening in the field of generative AI is legally above board and any lawsuits are doomed to fail.
“I see people on both sides of this extremely confident in their positions, but the reality is nobody knows,” Baio, who’s been following the generative AI scene closely, told The Verge. “And anyone who says they know confidently how this will play out in court is wrong.”
The output question: can you copyright what an AI model creates? For the first query, at least, the answer is not too difficult. In the US, there is no copyright protection for works generated solely by a machine. However, it seems that copyright may be possible in cases where the creator can prove there was substantial human input.
In September, the US Copyright Office granted a first-of-its-kind registration for a comic book generated with the help of text-to-image AI Midjourney. The comic is a complete work : an 18-page narrative with characters, dialogue, and a traditional comic book layout. And although it’s since been reported that the USCO is reviewing its decision, the comic’s copyright registration hasn’t actually been rescinded yet. It seems that one factor in the review will be the degree of human input involved in making the comic. Kristina Kashtanova, the artist who created the work, told IPWatchdog that she had been asked by the USCO “to provide details of my process to show that there was substantial human involvement in the process of creation of this graphic novel.” (The USCO itself does not comment on specific cases.) According to Guadamuz, this will be an ongoing issue when it comes to granting copyright for works generated with the help of AI. “If you just type ‘cat by van Gogh,’ I don’t think that’s enough to get copyright in the US,” he says. “But if you start experimenting with prompts and produce several images and start fine-tuning your images, start using seeds, and start engineering a little more, I can totally see that being protected by copyright.” Copyrighting an AI model’s output will likely depend on the degree of human involvement With this rubric in mind, it’s likely that the vast majority of the output of generative AI models cannot be copyright protected. They are generally churned out en masse with just a few keywords used as a prompt. But more involved processes would make for better cases. These might include controversial pieces, like the AI-generated print that won a state art fair competition.
In this case, the creator said he spent weeks honing his prompts and manually editing the finished piece, suggesting a relatively high degree of intellectual involvement.
Giorgio Franceschelli, a computer scientist who’s written on the problems surrounding AI copyright, says measuring human input will be “especially true” for deciding cases in the EU. And in the UK — the other major jurisdiction of concern for Western AI startups — the law is different yet again. Unusually, the UK is one of only a handful of nations to offer copyright for works generated solely by a computer , but it deems the author to be “the person by whom the arrangements necessary for the creation of the work are undertaken.” Again, there’s room for multiple readings (would this “person” be the model’s developer or its operator?), but it offers precedence for some sort of copyright protection to be granted.
Ultimately, though, registering copyright is only a first step, cautions Guadamuz. “The US copyright office is not a court,” he says. “You need registration if you’re going to sue someone for copyright infringement, but it’s going to be a court that decides whether or not that’s legally enforceable.” The input question: can you use copyright-protected data to train AI models? For most experts, the biggest questions concerning AI and copyright relate to the data used to train these models. Most systems are trained on huge amounts of content scraped from the web; be that text, code, or imagery. The training dataset for Stable Diffusion, for example — one of the biggest and most influential text-to-AI systems — contains billions of images scraped from hundreds of domains ; everything from personal blogs hosted on WordPress and Blogspot to art platforms like DeviantArt and stock imagery sites like Shutterstock and Getty Images. Indeed, training datasets for generative AI are so vast that there’s a good chance you’re already in one (there’s even a website where you can check by uploading a picture or searching some text ).
The justification used by AI researchers, startups, and multibillion-dollar tech companies alike is that using these images is covered (in the US, at least) by fair use doctrine , which aims to encourage the use of copyright-protected work to promote freedom of expression.
When deciding if something is fair use, there are a number of considerations, explains Daniel Gervais, a professor at Vanderbilt Law School who specializes in intellectual property law and has written extensively on how this intersects with AI. Two factors, though, have “much, much more prominence,” he says. “What’s the purpose or nature of the use and what’s the impact on the market.” In other words: does the use-case change the nature of the material in some way (usually described as a “transformative” use), and does it threaten the livelihood of the original creator by competing with their works? Training a generative AI on copyright-protected data is likely legal, but you could use that same model in illegal ways Considering the onus placed on these factors, Gervais says “it is much more likely than not” that training systems on copyrighted data will be covered by fair use. But the same cannot necessarily be said for generating content. In other words: you can train an AI model using other people’s data, but what you do with that model might be infringing. Think of it as the difference between making fake money for a movie and trying to buy a car with it.
Consider the same text-to-image AI model deployed in different scenarios. If the model is trained on many millions of images and used to generate novel pictures, it’s extremely unlikely that this constitutes copyright infringement. The training data has been transformed in the process, and the output does not threaten the market for the original art. But, if you fine-tune that model on 100 pictures by a specific artist and generate pictures that match their style, an unhappy artist would have a much stronger case against you.
“If you give an AI 10 Stephen King novels and say, ‘Produce a Stephen King novel,’ then you’re directly competing with Stephen King. Would that be fair use? Probably not,” says Gervais.
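To make the contrast concrete, here is a minimal inference-side sketch using the open-source diffusers library. The model ID and prompts are illustrative only (actual style fine-tuning would involve a separate training pipeline); note that nothing in the code changes between the two calls, only what the prompt targets.

```python
# Sketch: the same model, two very different uses. Assumes the
# open-source `diffusers` library and a public Stable Diffusion
# checkpoint; model ID and prompts are illustrative.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Generic output: the training data has been transformed, and the
# result competes with no individual creator's market.
generic = pipe("a watercolor landscape at dusk").images[0]

# Targeting one creator's style is where a fair use defense weakens,
# because the output competes directly with that creator's work.
targeted = pipe("a landscape in the style of [a specific artist]").images[0]
```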
Crucially, though, between these two poles of fair and unfair use, there are countless scenarios in which input, purpose, and output are all balanced differently and could sway any legal ruling one way or another.
Ryan Khurana, chief of staff at generative AI company Wombo, says most companies selling these services are aware of these differences. “Intentionally using prompts that draw on copyrighted works to generate an output [...] violates the terms of service of every major player,” he told The Verge over email. But, he adds, “enforcement is difficult,” and companies are more interested in “coming up with ways to prevent using models in copyright violating ways [...] than limiting training data.” This is particularly true for open-source text-to-image models like Stable Diffusion, which can be trained and used with zero oversight or filters. The company might have covered its back, but it could also be facilitating copyright-infringing uses.
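In practice, that kind of terms-of-service enforcement often amounts to screening prompts before they reach the model. A toy sketch of the idea follows; the blocklist and matching logic are entirely hypothetical, not any company’s actual filter.

```python
# Hypothetical sketch of the prompt screening a hosted image
# generator might run before honoring a request. Illustrative only.
BLOCKED_PHRASES = [
    "in the style of",   # common style-mimicry phrasing
    "by the artist",     # stand-in for a list of protected names
]

def is_prompt_allowed(prompt: str) -> bool:
    """Reject prompts that explicitly draw on a named creator's style."""
    lowered = prompt.lower()
    return not any(phrase in lowered for phrase in BLOCKED_PHRASES)

assert is_prompt_allowed("a watercolor landscape at dusk")
assert not is_prompt_allowed("A castle in the style of my favorite painter")
```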
Another variable in judging fair use is whether or not the training data and model have been created by academic researchers and nonprofits. This generally strengthens fair use defenses, and startups know this. So, for example, Stability AI, the company that distributes Stable Diffusion, didn’t directly collect the model’s training data or train the models behind the software. Instead, it funded and coordinated this work by academics, and the Stable Diffusion model is licensed by a German university.
This lets Stability AI turn the model into a commercial service (DreamStudio) while keeping legal distance from its creation.
Baio has dubbed this practice “AI data laundering.” He notes that this method has been used before with the creation of facial recognition AI software, and points to the case of MegaFace, a dataset compiled by researchers from the University of Washington by scraping photos from Flickr. “The academic researchers took the data, laundered it, and it was used by commercial companies,” says Baio. Now, he says, this data — including millions of personal pictures — is in the hands of “[facial recognition firm] Clearview AI and law enforcement and the Chinese government.” Such a tried-and-tested laundering process will likely help shield the creators of generative AI models from liability as well.
There’s a last twist to all this, though, as Gervais notes that the current interpretation of fair use may actually change in the coming months due to a pending Supreme Court case involving Andy Warhol and Prince.
The case involves Warhol’s use of photographs of Prince to create artwork. Was this fair use, or was it copyright infringement? “The Supreme Court doesn’t do fair use very often, so when they do, they usually do something major. I think they’re going to do the same here,” says Gervais. “And to say anything is settled law while waiting for the Supreme Court to change the law is risky.”

How can artists and AI companies make peace?

Even if the training of generative AI models is found to be covered by fair use, that will hardly solve the field’s problems. It won’t placate artists angry that their work has been used to train commercial models, nor will it necessarily hold true across other generative AI fields, like code and music. With this in mind, the question is: what remedies can be introduced, technical or otherwise, to allow generative AI to flourish while giving credit or compensation to the creators whose work makes the field possible?

The most obvious suggestion is to license the data and pay its creators. For some, though, this will kill the industry. Bryan Casey and Mark Lemley, authors of “Fair Learning,” a legal paper that has become the backbone of arguments touting fair use for generative AI, say training datasets are so large that “there is no plausible option simply to license all of the underlying photographs, videos, audio files, or texts for the new use.” Allowing any copyright claim, they argue, is “tantamount to saying, not that copyright owners will get paid, but that the use won’t be permitted at all.” Permitting “fair learning,” as they frame it, not only encourages innovation but allows for the development of better AI systems.
Others, though, point out that we’ve already navigated copyright disputes of comparable scale and complexity and can do so again. A comparison invoked by several experts The Verge spoke to was the era of music piracy, when file-sharing programs were built on the back of massive copyright infringement and flourished only until legal challenges forced new agreements that respected copyright.
“So, in the early 2000s, you had Napster, which everybody loved but was completely illegal. And today, we have things like Spotify and iTunes,” Matthew Butterick, a lawyer currently suing companies for scraping data to train AI models, told The Verge earlier this month.
“And how did these systems arise? By companies making licensing deals and bringing in content legitimately. All the stakeholders came to the table and made it work, and the idea that a similar thing can’t happen for AI is, for me, a little catastrophic.”

Wombo’s Ryan Khurana predicted a similar outcome. “Music has by far the most complex copyright rules because of the different types of licensing, the variety of rights-holders, and the various intermediaries involved,” he told The Verge.
“Given the nuances [of the legal questions surrounding AI], I think the entire generative field will evolve into having a licensing regime similar to that of music.”

Other alternatives are also being trialed. Shutterstock, for example, says it plans to set up a fund to compensate individuals whose work it’s sold to AI companies to train their models, while DeviantArt has created a metadata tag for images shared on the web that warns AI researchers not to scrape their content. (At least one small social network, Cohost, has already adopted the tag across its site and says if it finds that researchers are scraping its images regardless, it “won’t rule out legal action.”)
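The tag itself is lightweight: a “noai” directive that a site can serve in its page markup or HTTP headers, which compliant scrapers are expected to check. A rough sketch of such a check follows; the exact directive names are assumptions based on how the convention has been publicly described.

```python
# Sketch of how a well-behaved scraper could honor the no-AI tag.
# The directive spellings ("noai" in a meta tag or an X-Robots-Tag
# header) are assumptions about the convention, not a formal spec.
import requests
from html.parser import HTMLParser

class NoAIMetaFinder(HTMLParser):
    """Detects <meta> tags whose content includes a "noai" directive."""
    def __init__(self):
        super().__init__()
        self.opted_out = False

    def handle_starttag(self, tag, attrs):
        attr = dict(attrs)
        if tag == "meta" and "noai" in attr.get("content", "").lower():
            self.opted_out = True

resp = requests.get("https://example.com/artwork")  # hypothetical page
header_optout = "noai" in resp.headers.get("X-Robots-Tag", "").lower()

finder = NoAIMetaFinder()
finder.feed(resp.text)

if finder.opted_out or header_optout:
    print("Page opts out of AI training; skip it.")
```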
These approaches, though, have met with mixed responses from artistic communities. Can one-off license fees ever compensate for lost livelihood? And how does a no-scraping tag deployed now help artists whose work has already been used to train commercial AI systems? For many creators, it seems the damage has already been done. But AI startups are at least suggesting new approaches for the future.

One obvious step forward is for AI researchers to simply create databases where there is no possibility of copyright infringement — either because the material has been properly licensed or because it’s been created for the specific purpose of AI training. One such example is “The Stack,” a dataset for training AI designed specifically to avoid accusations of copyright infringement. It includes only code with the most permissive open-source licenses and offers developers an easy way to remove their data on request. Its creators say their approach could be used throughout the industry.
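In spirit, the filtering step is simple. Here is a toy sketch of that kind of license-based curation; the record fields, license list, and opt-out list are assumptions for illustration, not The Stack’s actual pipeline.

```python
# Rough sketch of license-based dataset filtering in the spirit of
# The Stack. Fields and lists here are hypothetical.
PERMISSIVE_LICENSES = {"mit", "apache-2.0", "bsd-3-clause", "unlicense"}
OPT_OUT_REPOS = {"github.com/someone/opted-out-repo"}  # hypothetical opt-outs

def keep(record: dict) -> bool:
    """Keep a code file only if its license is permissive and its
    repository has not requested removal."""
    return (
        record.get("license", "").lower() in PERMISSIVE_LICENSES
        and record.get("repo") not in OPT_OUT_REPOS
    )

corpus = [
    {"repo": "github.com/a/b", "license": "MIT", "content": "..."},
    {"repo": "github.com/c/d", "license": "GPL-3.0", "content": "..."},
]
train_set = [r for r in corpus if keep(r)]
print(len(train_set), "files kept")  # only the MIT-licensed file survives
```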
“The Stack’s approach can absolutely be adapted to other media,” Yacine Jernite, Machine Learning & Society lead at Hugging Face, which helped create The Stack in collaboration with partner ServiceNow, told The Verge.
“It is an important first step in exploring the wide range of mechanisms that exist for consent — mechanisms that work at their best when they take the rules of the platform that the AI training data was extracted from into account.” Jernite says Hugging Face wants to help create a “fundamental shift” in how creators are treated by AI researchers. But so far, the company’s approach remains a rarity.
What happens next?

Regardless of where we land on these legal questions, the various actors in the generative AI field are already gearing up for… something. The companies making millions from this tech are entrenching themselves: repeatedly declaring that everything they’re doing is legal (while presumably hoping no one actually challenges this claim). On the other side of no man’s land, copyright holders are staking out their own tentative positions without quite committing themselves to action. Getty Images recently banned AI-generated content because of the potential legal risk to customers (“I don’t think it’s responsible. I think it could be illegal,” CEO Craig Peters told The Verge last month), while music industry trade org RIAA declared that AI-powered music mixers and extractors are infringing its members’ copyright (though it didn’t go so far as to launch any actual legal challenges).
The first shot in the AI copyright wars has already been fired, though, with the launch last week of a proposed class action lawsuit against Microsoft, GitHub, and OpenAI.
The case accuses all three companies of knowingly reproducing open-source code through the AI coding assistant Copilot without the proper licenses. Speaking to The Verge last week, the lawyers behind the suit said it could set a precedent for the entire generative AI field (though other experts disputed this, saying any copyright challenges involving code would likely be separate from those involving content like art and music).
Guadamuz and Baio, meanwhile, both say they’re surprised there haven’t been more legal challenges yet. “Honestly, I am flabbergasted,” says Guadamuz. “But I think that’s in part because these industries are afraid of being the first one [to sue] and losing a decision. Once someone breaks cover, though, I think the lawsuits are going to start flying left and right.”

Baio suggested one difficulty is that many of the people most affected by this technology — artists and the like — are simply not in a good position to launch legal challenges. “They don’t have the resources,” he says. “This sort of litigation is very expensive and time-consuming, and you’re only going to do it if you know you’re going to win. This is why I’ve thought for some time that the first lawsuits around AI art will be from stock image sites. They seem poised to lose the most from this technology, they can clearly prove that a large amount of their corpus was used to train these models, and they have the funding to take it to court.”

Guadamuz agrees. “Everyone knows how expensive it’s going to be,” he says. “Whoever sues will get a decision in the lower courts, then they will appeal, then they will appeal again, and eventually, it could go all the way to the Supreme Court.”
" |