Items tagged with: googles
HN Discussion: https://news.ycombinator.com/item?id=19687237
Posted by aaronbrethorst (karma: 50449)
Post stats: Points: 124 - Comments: 157 - 2019-04-17T23:12:24Z
#HackerNews #case #googles #headquarters #hits #measles #silicon #valley
Investigators have been tapping into the tech giant’s enormous cache of location information in an effort to solve crimes. Here’s what this database is and what it does.
Article word count: 702
HN Discussion: https://news.ycombinator.com/item?id=19656923
Posted by pseudolus (karma: 17946)
Post stats: Points: 94 - Comments: 55 - 2019-04-14T01:21:41Z
#HackerNews #boon #enforcement #for #googles #law #sensorvault
The headquarters of Google in Manhattan. Credit: John Taggart for The New York Times
By Jennifer Valentino-DeVries
Law enforcement officials across the country have been seeking information from a Google database called Sensorvault — a trove of detailed location records involving at least hundreds of millions of devices worldwide, The New York Times found.
Though the new technique can identify suspects near crimes, it runs the risk of sweeping up innocent bystanders, highlighting the impact that companies’ mass collection of data can have on people’s lives.
The Sensorvault database is connected to a Google service called Location History. The feature, begun in 2009, involves Android and Apple devices.
Location History is not on by default. Google prompts users to enable it when they are setting up certain services — traffic alerts in Google Maps, for example, or group images tied to location in Google Photos.
If you have Location History turned on, Google will collect your data as long as you are signed in to your account and have location-enabled Google apps on your phone. The company can collect the data even when you are not using your apps, if your phone settings allow that.
Google says it uses the data to target ads and measure how effective they are — checking, for instance, when people go into an advertiser’s store. The company also uses the information in an aggregated, anonymized form to figure out when stores are busy and to provide traffic estimates. And those who enable Location History can see a timeline of their activities and get recommendations based on where they have been. Google says it does not sell or share the data with advertisers or other companies.
Google can also gather location information when you conduct searches or use Google apps that have location enabled. If you are signed in, this data is associated with your account.
The Associated Press reported last year that this data, called Web & App Activity, is collected even if you do not have Location History turned on. It is kept in a different database from Sensorvault, Google says.
To see some of the information in your Location History, you can look at your timeline. This map of your travels does not include all of your Sensorvault data, however.
Raw location data from mobile devices can be messy and sometimes incorrect. But computers can make good guesses about your likely path, and about which locations are most important. This is what you see on your timeline. To review all of your Location History, you can download your data from Google. To do that, go to Takeout.Google.com and select Location History. You can follow a similar procedure to download your Web & App Activity on that page.
Your Location History download arrives as raw, computer-readable data. Select the "JSON" format and open the file in a text editor to see what it contains.
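If you do download the export, a short script can make the raw file readable. This is a minimal sketch, assuming the historical Takeout layout (a top-level "locations" array with "timestampMs", "latitudeE7", and "longitudeE7" fields); Google may change the export format at any time.

```python
import json

def read_location_history(path):
    """Parse a Google Takeout "Location History" export into a list of points.

    Field names below follow the historical JSON layout and are an assumption;
    treat this as a sketch, not a reference for the current format.
    """
    with open(path) as f:
        data = json.load(f)
    points = []
    for rec in data.get("locations", []):
        points.append({
            "timestamp_ms": int(rec["timestampMs"]),
            # Coordinates are stored as integers scaled by 1e7.
            "lat": rec["latitudeE7"] / 1e7,
            "lon": rec["longitudeE7"] / 1e7,
        })
    return points
```

Each point can then be plotted or grouped by day to approximate the timeline view Google shows you.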
You can disable or delete this data; the process varies depending on whether you are on a phone or a computer. In its Help Center, Google provides instructions for disabling or deleting Location History and Web & App Activity.
For years, police detectives have given Google warrants seeking location data tied to specific users’ accounts.
But the new warrants, often called “geofence” requests, instead specify an area near a crime. Google looks in Sensorvault for any devices that were there at the right time and provides that information to the police.
Google first labels the devices with anonymous ID numbers, and detectives look at locations and movement patterns to see if any appear relevant to the crime. Once they narrow the field to a few devices, Google reveals information such as names and email addresses.
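The two-step process described here can be illustrated with a toy example. Everything below (the record layout, field names, and function) is hypothetical and only mirrors the shape of the workflow: filter by area and time first, while device IDs are still anonymous.

```python
from dataclasses import dataclass

@dataclass
class Ping:
    device_id: str   # anonymous ID in step one; identity revealed only later
    lat: float
    lon: float
    timestamp: int   # Unix seconds

def geofence_query(pings, lat_min, lat_max, lon_min, lon_max, t_start, t_end):
    """Return anonymous device IDs seen inside the box during the time window."""
    return sorted({
        p.device_id
        for p in pings
        if lat_min <= p.lat <= lat_max
        and lon_min <= p.lon <= lon_max
        and t_start <= p.timestamp <= t_end
    })
```

In the real process, only after detectives narrow this anonymous list to a few candidates does Google attach names and email addresses.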
Jennifer Valentino-DeVries is a reporter on the investigative team, specializing in technology coverage. Before joining The Times, she worked at The Wall Street Journal and helped to launch the Knight First Amendment Institute at Columbia University. @jenvalentino
A version of this article appears in print on Page A19 of the New York edition with the headline: Google's Sensorvault: Here's How It Works.
One member resigned and two more are under fire. It’s only a week old.
Article word count: 1490
HN Discussion: https://news.ycombinator.com/item?id=19567290
Posted by cpeterso (karma: 29695)
Post stats: Points: 97 - Comments: 126 - 2019-04-03T21:10:19Z
#HackerNews #already #apart #board #ethics #falling #googles #new
The Google office in Berlin, at its opening in January 2019. Carsten Koall/Getty Images
Just a week after it was announced, Google’s new AI ethics board is already in trouble.
The board, founded to guide “responsible development of AI” at Google, would have had eight members and met four times over the course of 2019 to consider concerns about Google’s AI program. Those concerns include how AI can enable authoritarian states, how AI algorithms produce disparate outcomes, whether to work on military applications of AI, and more.
Of the eight people listed in Google’s initial announcement, one (privacy researcher Alessandro Acquisti) has announced on Twitter that he won’t serve, and two others are the subject of petitions calling for their removal — Kay Coles James, president of the conservative Heritage Foundation think tank, and Dyan Gibbens, CEO of drone company Trumbull Unmanned. Thousands of Google employees have signed onto the petition calling for James’s removal.
James and Gibbens are two of the three women on the board. The third, Joanna Bryson, was asked if she was comfortable serving on a board with James, and answered, “Believe it or not, I know worse about one of the other people.”
Altogether, it’s not the most promising start for the board.
The whole situation is embarrassing to Google, but it also illustrates something deeper: AI ethics boards like Google’s, which are in vogue in Silicon Valley, largely appear not to be equipped to solve, or even make progress on, hard questions about ethical AI progress.
A role on Google’s AI board is an unpaid, toothless position that cannot possibly, in four meetings over the course of a year, arrive at a clear understanding of everything Google is doing, let alone offer nuanced guidance on it. There are urgent ethical questions about the AI work Google is doing — and no real avenue by which the board could address them satisfactorily. From the start, it was badly designed for the goal — in a way that suggests Google is treating AI ethics more like a PR problem than a substantive one.
Nearly half the board has resigned or is under fire
Google announced its AI ethics board last week, saying the board would "consider some of Google's most complex challenges that arise under our AI Principles, like facial recognition and fairness in machine learning, providing diverse perspectives to inform our work."
From the start, the board attracted criticism. Many people were outraged about the inclusion of Kay Coles James, the Heritage Foundation president.
“In selecting James, Google is making clear that its version of ‘ethics’ values proximity to power over the wellbeing of trans people, other LGBTQ people, and immigrants,” argues an open letter signed by more than 1,800 Google employees. A particular cause for concern was James’s stance that the trans rights movement is seeking to “change the definition of women to include men” in order to “erase” women’s rights.
Today @heritage will critique gender identity @UN_CSW because powerful nations are pressing for the radical redefining of sex. If they can change the definition of women to include men, they can erase efforts to empower women economically, socially, and politically. #CSW63 — Kay Coles James (@KayColesJames) March 20, 2019
“Google cannot claim to support trans people and its trans employees — a population that faces real and material threats — and simultaneously appoint someone committed to trans erasure to a key AI advisory position,” concludes the open letter.
Others called on Google to remove Dyan Gibbens from the board. Gibbens is the CEO of Trumbull Unmanned, a drone technology company, and she previously worked on drones for the US military. A year ago, Google employees were outraged when it was revealed that the company had been working with the US military on drone technology as part of so-called Project Maven. With employees resigning in protest, Google promised not to renew Maven. Collaborating with the military on drone technology remains a touchy subject internally, and one where many Google employees don’t have a lot of trust in Google leadership.
On Saturday, Alessandro Acquisti, the privacy researcher, announced his resignation from the panel, saying, "I'd like to share that I've declined the invitation to the ATEAC [Advanced Technology External Advisory Council]. While I'm devoted to research grappling with key ethical issues of fairness, rights & inclusion in AI, I don't believe this is the right forum for me to engage in this important work."
Even before the outrage, this panel was not set up for success
But the collapse of Google’s panel and the controversy over its make-up almost obscures a deeper problem: This was not an entity set up to do a good job.
Google’s announcement states that the panel would serve over the course of 2019, and meet four times. That’s just not very much time together, given the complexity of the issues members will be advising on. It’s not enough time to hear about even a fraction of Google’s ongoing projects, which suggests the board won’t be giving advice on those.
Second, the board positions are unpaid. Some have contended that a paid oversight committee would be worse, because board members would be indebted to Google, but others think unpaid board positions advantage the independently wealthy. These critics see the unpaid positions as another sign that Google isn’t taking the AI ethics board very seriously, and that the company doesn’t expect members to spend much time on it, either.
Next, the ethics panel — as has been the case with ethics panels at other top tech companies — does not have the power to do anything. Google says “we hope this effort will inform both our own work and the broader technology sector,” but it’s very unclear who, if anyone, at Google will rely on these recommendations and which decisions the board will get to make recommendations about.
Overall, it’s not clear whether the panel will be used for guidance on internal Google matters at all. What it definitely will be used for is PR.
Panel member Joanna Bryson, defending Coles James's inclusion on Twitter, said, "I know that I have pushed [Google] before on some of their associations, and they say they need diversity in order to be convincing to society broadly, e.g. the GOP." This makes sense as a strategic priority for Google, whose products, of course, are used by nearly everyone.
But if Google’s goal with the panel is to “be convincing to society broadly” without necessarily changing anything the company does, that’s not really AI ethics — it’s AI marketing.
And fundamentally, that’s what’s wrong with AI ethics panels. Google is not the only tech company to have one, and while Microsoft’s AI ethics committee and Facebook’s center for ethics research have not been embroiled in quite as much drama, they don’t have official decision-making power, either.
Ethical deployment of powerful emerging technologies involves tough decisions. Will a company work with Immigration and Customs Enforcement (ICE)? Or with the Chinese government on technology that aids it in its ongoing, horrifying campaign to imprison a million Uighurs? If a facial recognition tool works better on white Americans than black Americans, what does it mean to fairly deploy it? If AI is creating and exacerbating inequalities, what’s the plan to tackle them? If a line of AI research looks dangerous — if some experts are warning it could have catastrophic effects for the world — will Google pursue it anyway?
“The frameworks presently governing AI are not capable of ensuring accountability,” a review of AI ethical governance by the AI Now Institute concluded in November.
All of those calls have to be made at the highest level of the company. Google quite reasonably doesn’t want to give control of these decisions to outsiders, but that means that the people tasked with providing guidance on AI ethics are removed from the context where key AI policy decisions will happen. A better panel would contain both decision makers at Google and outside voices; would issue formal, specific, detailed recommendations; and would announce publicly whether Google followed them.
Neither Google nor anyone else appears actually comfortable with meaningful external oversight. Neither Google nor anyone else seems to have a principled or systematic way to handle the power it has stumbled into. That’s why companies are formulating these panels with goals like “be convincing to society broadly” — as Google aimed for with the inclusion of James — rather than “review the process for approving collaborations with the U.S. military.” The brouhaha has convinced me that Google needs an AI ethics board quite badly — but not the kind it seems to want to try to build.
Sign up for the Future Perfect newsletter. Twice a week, you’ll get a roundup of ideas and solutions for tackling our biggest challenges: improving public health, decreasing human and animal suffering, easing catastrophic risks, and — to put it simply — getting better at doing good
Google's product support has become a joke, and the company should be very concerned.
Article word count: 3164
HN Discussion: https://news.ycombinator.com/item?id=19553294
Posted by vanburen (karma: 787)
Post stats: Points: 124 - Comments: 103 - 2019-04-02T12:28:21Z
#HackerNews #are #brand #constant #damaging #googles #its #product #shutdowns
An artistʼs rendering of Googleʼs current reputation.
Itʼs only April, and 2019 has already been an absolutely brutal year for Googleʼs product portfolio. The Chromecast Audio was discontinued January 11. YouTube annotations were removed and deleted January 15. Google Fiber packed up and left a Fiber city on February 8. Android Things dropped IoT support on February 13. Googleʼs laptop and tablet division was reportedly slashed on March 12. Google Allo shut down on March 13. The "Spotlight Stories" VR studio closed its doors on March 14. The goo.gl URL shortener was cut off from new users on March 30. Gmailʼs IFTTT support stopped working March 31.
And today, April 2, weʼre having a Google Funeral double-header: both Google+ (for consumers) and Google Inbox are being laid to rest. Later this year, Google Hangouts "Classic" will start to wind down, and somehow also scheduled for 2019 is Google Musicʼs "migration" to YouTube Music, with the Google service being put on death row sometime afterward.
We are 91 days into the year, and so far, Google is racking up an unprecedented body count. If we just take the official shutdown dates that have already occurred in 2019, a Google-branded product, feature, or service has died, on average, about every nine days.
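That "about every nine days" figure checks out against the dates listed above. A quick back-of-the-envelope calculation (counting the two April 2 shutdowns separately, and omitting the reported laptop-division cuts, which have no official shutdown date):

```python
from datetime import date

# Official 2019 shutdown dates named in the article.
shutdowns = [
    date(2019, 1, 11),  # Chromecast Audio discontinued
    date(2019, 1, 15),  # YouTube annotations removed and deleted
    date(2019, 2, 8),   # Google Fiber leaves a Fiber city
    date(2019, 2, 13),  # Android Things drops IoT support
    date(2019, 3, 13),  # Google Allo shut down
    date(2019, 3, 14),  # "Spotlight Stories" VR studio closed
    date(2019, 3, 30),  # goo.gl cut off from new users
    date(2019, 3, 31),  # Gmail IFTTT support stopped
    date(2019, 4, 2),   # Google+ (consumer) laid to rest
    date(2019, 4, 2),   # Google Inbox laid to rest
]

days_so_far = (date(2019, 4, 2) - date(2019, 1, 1)).days  # 91 days into the year
avg_gap = days_so_far / len(shutdowns)                    # 91 / 10 = 9.1 days
```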
Some of these product shutdowns have transition plans, and some of them (like Google+) represent Google completely abandoning a user base. The specifics arenʼt crucial, though. What matters is that every single one of these actions has a negative consequence for Googleʼs brand, and the near-constant stream of shutdown announcements makes Google seem more unstable and untrustworthy than it has ever been. Yes, there was the one time Google killed Google Wave nine years ago or when it took Google Reader away six years ago, but things were never this bad.
For a while there has been a subset of people concerned about Googleʼs privacy and antitrust issues, but now Google is eroding trust that its existing customers have in the company. Thatʼs a huge problem. Google has significantly harmed its brand over the last few months, and Iʼm not even sure the company realizes it.
Google products require trust and investment
The latest batch of dead and dying Google apps.
Google is a platform company. Be it cloud compute, app and extension ecosystems, developer APIs, advertising solutions, operating-system pre-installs, or the storage of user data, Google constantly asks for investment from consumers, developers, and partner companies in the things it builds. Any successful platform will pretty much require trust and buy-in from these groups. These groups need to feel the platform they invest in today will be there tomorrow, or theyʼll move on to something else. If any of these groups loses faith in Google, it could have disastrous effects for the company.
Consumers want to know the photos, videos, and emails they upload to Google will stick around. If you buy a Chromecast or Google Home, you need to know the servers and ecosystems they depend on will continue to work, so they donʼt turn into fancy paperweights tomorrow. If you take the time to move yourself, your friends, and your family to a new messaging service, you need to know it wonʼt be shut down two years later. If you begrudgingly join a new social network that was forced down your throat, you need to know it wonʼt leak your data everywhere, shut down, and delete all your posts a few years later.
There are also enterprise customers, who, above all, like safe bets with established companies. The old adage of "Nobody ever got fired for buying IBM" is partly a reference for the enterpriseʼs desire for a stable, steady, reliable tech partner. Google is trying to tackle this same market with its paid G Suite program, but the most it can do in terms of stability is post a calendar detailing the rollercoaster of consumer-oriented changes coming down the pipeline. Thereʼs a slower "Scheduled release track" that delays the rollout of some features, but things like a complete revamp of Gmail eventually all still arrive. G Suite has a "Core Services" list meant to show confidence in certain products sticking around, but some of the entries there, like Hangouts and Google Talk, still get shut down.
Developers gamble on a platformʼs stability even more than consumers do. Consumers might trust a service with their data or spend money on hardware, but developers can spend months building an app for a platform. They need to read documentation, set up SDKs, figure out how APIs work, possibly pay developer startup fees, and maybe even learn a new language. They wonʼt do any of this if they donʼt have faith in the long-term stability of the platform.
Developers can literally build their products around paid-access Google APIs like the Google Maps API, and when Google does things like raise the price of the Maps API by 14x for some use cases, it is incredibly disruptive for those businesses and harmful to Googleʼs brand. When apps like Reddit clients are flagged by Google Play "every other month" for the crime of displaying user-generated content and when itʼs impossible to talk to a human at Google about anything, developers are less likely to invest in your schizophrenic ecosystem.
Hardware manufacturers and other company partners need to be able to trust a company, too. Google constantly asks hardware developers to build devices dependent on its services. These are things like Google Assistant-compatible speakers and smart displays, devices with Chromecast built in, and Android and Chrome OS devices. Manufacturers need to know a certain product or feature they are planning to integrate will be around for years, since they need to both commit to a potentially multi-year planning and development cycle, and then it needs to survive long enough for customers to be supported for a few years. Watching Android Things chop off a major segment of its market nine months after launch would certainly make me nervous to develop anything based on Android Things. Imagine the risk Volvo is taking by integrating the new Android Auto OS into its upcoming Polestar 2: vehicles need around five years of development time and still need to be supported for several years after launch.
Google’s shutdowns cast a shadow over the entire company
With so many shutdowns, tracking Googleʼs body count has become a competitive industry on the Internet. Over on Wikipedia, the list of discontinued Google products and services is starting to approach the size of the active products and services listed. There are entire sites dedicated to discontinued Google products, like killedbygoogle.com, The Google Cemetery, and didgoogleshutdown.com.
I think weʼre seeing a lot of the consequences of Googleʼs damaged brand in the recent Google Stadia launch. A game streaming platform from one of the worldʼs largest Internet companies should be grounds for excitement, but instead, the baggage of the Google brand has people asking if they can trust the service to stay running.
In addition to the endless memes and jokes youʼll see in every related comments section, youʼre starting to see Google skepticism in mainstream reporting, too. Over at The Guardian, this line makes the pullquote: "A potentially sticky fact about Google is that the company does have a habit of losing interest in its less successful projects." IGN has a whole section of a report questioning "Googleʼs Commitment." From a Digital Foundry video: "Google has this reputation for discontinuing services that are often good, out of nowhere." One of SlashGearʼs "Stadia questions that need answers" is "Can I trust you, Google?"
Googleʼs Phil Harrison talks about the new Google Stadia controller.
One of my favorite examples came from a Kotaku interview with Phil Harrison, the leader of Google Stadia. In an audio interview, the site lays this whopper of a question on him: "One of the sentiments we saw in our comments section a lot is that Google has a long history of starting projects and then abandoning them. Thereʼs a worry, I think, from users who might think that Google Stadia is a cool platform, but if Iʼm connecting to this and spending money on this platform, how do I know for sure that Google is still sticking with it for two, three, five years? How can you guys make a commitment that Google will be sticking with this in a way that they havenʼt stuck with Google+, or Google Hangouts, or Google Fiber, Reader, or all the other things Google has abandoned over the years?"
Yikes. Kotaku is totally justified to ask a question like this, but to have one of your new executives face questions of "When will your new product shut down?" must be embarrassing for Google.
Harrisonʼs response to this question started with a surprisingly honest acknowledgement: "I understand the concern." Harrison, seemingly, gets it. He seemingly understands that itʼs hard to trust Google after so many product shutdowns, and he knows the Stadia team now faces an uphill battle. For the record, Harrison went on to cite Googleʼs sizable investment in the project, saying Stadia was "Not a trivial product" and was a "significant cross-company effort." (Also for the record: you could say all the same things about Google+ a few years ago, when literally every Google employee was paid to work on it. Now it is dead.)
Harrison and the rest of the Stadia team had nothing to do with the closing of Google Inbox, or the shutdown of Hangouts, or the removal of any other popular Google product. They are still forced to deal with the consequences of being associated with "Google the Product Killer," though. If Stadia was an Amazon product, I donʼt think we would see these questions of when it would shut down. Microsoftʼs game streaming service, Project xCloud, only faces questions about feasibility and appeal, not if Microsoft will get bored in two years and dump the project.
Listing image by Aurich Lawson
Googleʼs love of product shutdowns is mostly just a side effect of Googleʼs love for developing products. Calling anything a "Google Product" is usually a gross simplification—Google rarely does anything as a singular company. Instead, the industry giant is made up of autonomous product groups that develop and launch things on their own schedule. This is why Google often ends up making "two of everything": different teams donʼt communicate and end up tackling the same problem with different ideas.
Googleʼs strategy of having multiple teams throw things against the wall to see what sticks leads to lots and lots of products and services launching all the time, all with varying levels of quality, integration with other Google products, and varying lifetimes. It also leads to lots and lots of product cancellations.
A better way to frame launches and other decisions inside of Google is try to figure out which team inside of Google has built a product, and to view each product team as a separate entity. The Google Assistant does well, because it is run by the Google Search team. On the other side of the spectrum, we have the Google Messaging team, which—after Hangouts, Hangouts Chat, Allo, Duo, Google Voice, and Android Messages—has pretty much no credibility left at all. The Android Team is easily one of the steadiest, most reliable groups at Google. Having various teams launch whatever hardware they want was a mess until all the hardware was put under the control of a new Google Hardware division.
The Gmail team lives under the "Google Apps" umbrella, and itʼs responsible for developing and shutting down Inbox. Google Apps, with its enterprise focus, is usually a stalwart group, and Inbox is the first big shutdown from the Google Apps team in a long time. Google Fiber is not even part of Google; instead, itʼs a separate company under Googleʼs parent company, Alphabet.
Every shutdown has a story
Google+ was created as a brand-new division inside of Google, led by Vic Gundotra. Back in 2011, success in social was considered critical to Googleʼs survival, and Gundotra was given the title of "Senior Vice President." That made him one of eight or so people who regularly reported to then-CEO Larry Page. From here Google+ followed a pattern we see a few times with Google product launches and cancellations: Gundotra, the driving force behind Google+, left Google (or perhaps was compelled to leave Google) in 2014, which signaled the beginning of the end for Google+. Google+ was immediately stopped, Plusʼ more successful features were spun off, and eventually Google killed Google+ after a revelation of data security issues was made public.
Any website with traffic analytics will tell you that Google+ usage has been continually declining, but shutting down a major product due to a data leak is certainly a strange decision. I could understand if the product was being abandoned entirely, but the enterprise version of Google Plus will continue to live on. Google has even promised a redesign and new features for the enterprise version.
Hangouts was a product that never quite found a solid home inside Google. It was cooked up by the Google+ team as a way to combine all of Googleʼs other messaging services into a single app. When Plus started its death spiral, Hangouts didnʼt have an obvious home in another division at Google. Eventually, the standalone messaging team was created, but it seemed more interested in starting its own (numerous) projects than supporting a messaging app created by someone else.
Google Play Music is dying due to pretty much the same situation as Hangouts. Back in 2011, iOS had a great music solution (iTunes), while Android didnʼt. So Google Music was created by the Android team as part of the "Android Market" content store. With Web clients and plans to branch out onto iOS, the "Android Market" branding didnʼt make a ton of sense, so eventually the "Google Play" brand was born, and eventually Google Play became separate from the Android division. Now we have Googleʼs YouTube taking over a lot of Googleʼs media content strategy with all new apps, and just like Hangouts, it seems like a solid product is dying due to "not invented here" syndrome.
I could go on forever about the explanations behind Googleʼs many shutdowns. The shutdowns are all from independent teams making independent decisions, with products, employees, and divisions shifting around as time goes by. The rationale behind each shutdown doesnʼt really matter though—the problem is the cumulative effect of all these individual shutdowns on Googleʼs reputation and Googleʼs customers that, time and time again, have products taken away from them.
Maybe it’s time for a public roadmap
With all of the shutdowns already announced, Iʼm not sure thereʼs anything Google can do to help its reputation at this point. The amount of people I see still bringing up Google Readerʼs shutdown is incredible—having a frequently used Web service snatched away from you sticks with people. If people lose confidence in Googleʼs ability to host a stable lineup of services, more and more users will move out of the Google ecosystem. Then, like weʼre already seeing with Stadia, the company would face an uphill battle to get people to use its new products.
Iʼve been promoting a "wait and see" approach for most new Google products since at least 2016. But to see Googleʼs support now become the subject of punchlines on the Internet should be extremely concerning for Google.
One thing that could placate Google users is for the company to just tell us what is going on. Google already makes support promises for some of its products. Pixel phones and Chromebooks both have dashboards that show promised support windows and public end-of-life dates. Meanwhile, Google already hosts various uptime pages and other statistics. I want communication from Google that says which products will be around for a long time and which are a low priority at the company. Would it be so hard to publicly commit to running Stadia for five years no matter what? For its more successful products, Google could commit to 10 years of running a service and update the dashboard from time to time with later dates.
I realize most companies donʼt do this, but most companies donʼt have the reputation Google has for killing products. It makes sense to counter the memes of "haha, how long until Google discontinues this product?" with a public statement of "not for at least seven years." We just want to see a damn product roadmap, Google. Give us a list of "Long Term Support (LTS)" products.
Google posts public support timelines for Pixel phones, why not products and services, too?
Google likes to experiment, but it needs to be better at communicating what products will be around for a while and which ones will be thrown against the wall to see what sticks. Sometimes Google is good with this kind of communication. The recent launch of Googleʼs Reply app was handled well, for example. Google called the service "an experiment," and it was from a new skunkworks inside Google called "Area 120." Everything about the service made it sound like a temporary testing ground, and when the product was shut down, Googleʼs messaging was great: "Reply was an experiment, and that experiment has now ended." This was a fine way to go about things.
By contrast, nothing about the launch of Google Inbox made it sound like a product that would only stick around for a few years. Inbox was "years in the making," and the blog post made it seem like Googleʼs email client for the future.
As it stands now, products that were at the center of the company a few years ago (RIP, Google+) are on the chopping block in 2019, and Google seems ready to kill any product that doesnʼt have a billion daily active users. Without knowing the reason behind this wave of shutdowns (was there some new mandate inside the company to trim down?), nothing from Google seems safe anymore.
Google neglected to mention Google Voice in its last big messaging update. Should we read into that? Wazeʼs features are slowly being moved over to Google Maps. Is that a bad sign? Wear OS (formerly Android Wear) is basically in last place in the smartwatch wars. Nest doesnʼt make a profit and was recently stripped of its Google independence. Googleʼs Fuchsia OS is staring down an expensive multi-year development cycle, and the supposed plan to replace Android will be a steep uphill battle. How confident are you that all of these products will be around in a few years?
Every time Google shuts down a product, its reputation takes a hit. A shutdown makes users feel betrayed, erodes trust in Googleʼs other services, and makes it harder for Google to pitch new products to users. With so many shutdowns happening lately, Iʼve got to wonder whether Google users will start seeking similar services from companies that simply seem more stable.
Dunford said he was concerned that the work Google was doing with China on AI was undermining the U.S. military.
Article word count: 398
HN Discussion: https://news.ycombinator.com/item?id=19464724
Posted by Jerry2 (karma: 14854)
Post stats: Points: 123 - Comments: 78 - 2019-03-22T17:24:57Z
#HackerNews #advantage #china #dunford #eroding #googles #military #says #with #work
Joint Chiefs Chairman Gen. Joseph Dunford said Thursday that he would likely be meeting next week with Google executives on his concerns that the work Google was doing with China on artificial intelligence and other technologies was undermining the U.S. military.
"This is not about me and Google, this [is] about us looking at the second- and third-order effects of our business ventures in China [and] the impact itʼs going to have on the U.S. ability to maintain a competitive military advantage and all that goes with it," Dunford said.
Dunford said he had general concerns about other U.S. business ventures in China, but "In the case of Google, they were highlighted because they have an artificial intelligence venture in China."
U.S. companies must realize that in doing business with China, "they are automatically required to have a cell of the Chinese Communist Party (CCP) in that company and that itʼs going to lead to that intellectual property from that company finding its way to the Chinese military," Dunford said. "Thereʼs a distinction without a difference between the CCP and the government and the Chinese military."
Historically, one of the reasons for the U.S. maintaining a military advantage over other nations has been enduring partnerships between the Pentagon and industry, and Chinese President Xi Jinping has taken a similar path in Chinaʼs effort to erase the U.S. advantage, Dunford said.
Unless precautions are taken, U.S. business ventures in China could "enable the Chinese military to take advantage of the technology developed in the United States," Dunford said.
The remarks at the Atlantic Council event echoed those expressed by Dunford and Acting Defense Secretary Patrick Shanahan at a Senate Armed Services Committee hearing last week on Google and other firms doing business in China while showing reluctance to work with the U.S. military.
Last year, Google announced that it would not renew a contract with the Pentagon for artificial intelligence work, following protests from employees who charged that the technology could be used for lethal purposes.
At the Senate hearing, Shanahan said that Google has shown "a lack of willingness to support DoD programs."
He added that China often uses technology developed in the private sector for military purposes.
"The technology that is developed in the civil world transfers to the military world; itʼs a direct pipeline," Shanahan said.
-- Richard Sisk can be reached at firstname.lastname@example.org.
Last month, Google announced that its Nest Secure would be updated to work with Google Assistant software. The problem? Google never told users its product had a microphone to begin with. Simple…
Article word count: 1018
HN Discussion: https://news.ycombinator.com/item?id=19407147
Posted by johnisgood (karma: 194)
Post stats: Points: 94 - Comments: 39 - 2019-03-16T10:17:05Z
#HackerNews #and #fiasco #googles #harms #invades #nest #privacy #their #trust #user
Technology companies, lawmakers, privacy advocates, and everyday consumers likely disagree about exactly how a company should go about collecting user data. But, following a trust-shattering move by Google last month regarding its Nest Secure product, consensus on one issue has emerged: Companies shouldn’t ship products that can surreptitiously spy on users.
Failing to disclose that a product can collect information from users in ways they couldn’t have reasonably expected is bad form. It invades privacy, breaks trust, and robs consumers of the ability to make informed choices.
While collecting data on users is nearly inevitable in today’s corporate world, secret, undisclosed, or unpredictable data collection—or data collection abilities—is another problem.
A smart-home speaker shouldn’t be secretly hiding a video camera. A secure messaging platform shouldn’t have a government-operated backdoor. And a home security hub that controls an alarm, keypad, and motion detector shouldn’t include a clandestine microphone feature—especially one that was never announced to customers.
And yet, that is precisely what Google’s home security product includes.
Google fumbles once again
Last month, Google announced that its Nest Secure would be updated to work with Google Assistant software. Following the update, users could simply utter “Hey Google” to access voice controls on the product line-up’s “Nest Guard” device.
The main problem, though, is that Google never told users that its product had an internal microphone to begin with. Nowhere inside the Nest Guard’s hardware specs, or in its marketing materials, could users find evidence of an installed microphone.
When Business Insider broke the news, Google fumbled ownership of the problem: “The on-device microphone was never intended to be a secret and should have been listed in the tech specs,” a Google spokesperson said. “That was an error on our part.”
Customers, academics, and privacy advocates balked at this explanation.
“This is deliberately misleading and lying to your customers about your product,” wrote Eva Galperin, director of cybersecurity at Electronic Frontier Foundation.
“Oops! We neglected to mention we’re recording everything you do while fronting as a security device,” wrote Scott Galloway, professor of marketing at the New York University Stern School of Business.
The Electronic Privacy Information Center (EPIC) spoke in harsher terms: Google’s disclosure failure wasn’t just bad corporate behavior; it was downright criminal.
“It is a federal crime to intercept private communications or to plant a listening device in a private residence,” EPIC said in a statement. In a letter, the organization urged the Federal Trade Commission to take “enforcement action” against Google, with the hope of eventually separating Nest from its parent. (Google purchased Nest in 2014 for $3.2 billion.)
Days later, the US government stepped in. The Senate Select Committee on Commerce sent a letter to Google CEO Sundar Pichai, demanding answers about the company’s disclosure failure. Whether Google was actually recording voice data didn’t matter, the senators said, because hackers could still have taken advantage of the microphone’s capability.
“As consumer technology becomes ever more advanced, it is essential that consumers know the capabilities of the devices they are bringing into their homes so they can make informed choices,” the letter said.
This isn’t just about user data
Collecting user data is essential to today’s technology companies. It powers Yelp recommendations based on a user’s location, product recommendations based on an Amazon user’s prior purchases, and search results based on a Google user’s history. Collecting user data also helps companies find bugs, patch software, and retool their products to their users’ needs.
But some of that data collection is visible to the user. And when it isn’t, it can at least be learned by savvy consumers who research privacy policies, read tech specs, and compare similar products. Other home security devices, for example, advertise the ability to trigger alarms at the sound of broken windows—a functionality that demands a working microphone.
Google’s failure to disclose its microphone prevented even the most privacy-conscious consumers from knowing what they were getting in the box. It is nearly the exact opposite of the approach that rival home speaker maker Sonos took when it installed a microphone in its own device.
Sonos does it better
In 2017, Sonos revealed that its newest line of products would eventually integrate with voice-controlled smart assistants. The company opted for transparency: the status light on its microphone-equipped speakers is hard-wired to the microphone itself, so the light is on whenever the device is capable of listening.
While this function has upset some Sonos users who want to turn off the microphone light, the company hasn’t budged.
A Sonos spokesperson said the company values its customers’ privacy because it understands that people are bringing Sonos products into their homes. Adding a voice assistant to those products, the spokesperson said, resulted in Sonos taking a transparent and plain-spoken approach.
Now compare this approach to Google’s.
Consumers purchased a product that they trusted—quite ironically—with the security of their homes, only to realize that, by purchasing the product itself, their personal lives could have become less secure. This isn’t just a company failing to disclose the truth about its products. It’s a company failing to respect the privacy of its users.
A microphone in a home security product may well be a useful feature that many consumers will not only endure but embrace. In fact, internal microphones are available in many competitor products today, proving their popularity. But a secret microphone installed without user knowledge instantly erodes trust.
As we showed in our recent data privacy report, users care a great deal about protecting their personal information online and take many steps to secure it. To win over their trust, businesses need to responsibly disclose features included in their services and products—especially those that impact the security and privacy of their customers’ lives. Transparency is key to establishing and maintaining trust online.
Voice recognition is a standard part of the smartphone package these days, and a corresponding part is the delay while you wait for Siri, Alexa, or Google to return your query, either correctly…
Article word count: 482
HN Discussion: https://news.ycombinator.com/item?id=19371890
Posted by Errorcod3 (karma: 3470)
Post stats: Points: 207 - Comments: 97 - 2019-03-12T19:14:49Z
#HackerNews #and #googles #instantly #new #offline #only #pixel #recognition #system #voice #works
Voice recognition is a standard part of the smartphone package these days, and a corresponding part is the delay while you wait for Siri, Alexa or Google to return your query, either correctly interpreted or horribly mangled. Google’s latest speech recognition works entirely offline, eliminating that delay altogether — though of course mangling is still an option.
The delay occurs because your voice, or some data derived from it anyway, has to travel from your phone to the servers of whoever operates the service, where it is analyzed and sent back a short time later. This can take anywhere from a handful of milliseconds to multiple entire seconds (what a nightmare!), or longer if your packets get lost in the ether.
Why not just do the voice recognition on the device? There’s nothing these companies would like more, but turning voice into text on the order of milliseconds takes quite a bit of computing power. It’s not just about hearing a sound and writing a word — understanding what someone is saying word by word involves a whole lot of context about language and intention.
Your phone could do it, for sure, but it wouldn’t be much faster than sending it off to the cloud, and it would eat up your battery. But steady advancements in the field have made it plausible to do so, and Google’s latest product makes it available to anyone with a Pixel.
Googleʼs work on the topic, documented in a paper here, built on previous advances to create a model small and efficient enough to fit on a phone (itʼs 80 megabytes, if youʼre curious), but capable of hearing and transcribing speech as you say it. No need to wait until youʼve finished a sentence to think whether you meant “their” or “there” — it figures it out on the fly.
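To make the "figures it out on the fly" behavior concrete, here is a purely hypothetical toy sketch — none of these names, frames, or rules come from Googleʼs actual model, which is a learned neural network, not hand-written rules. The idea it illustrates: a streaming recognizer emits a partial transcript after every audio frame and may revise earlier words once later context disambiguates them.

```python
# Toy streaming decoder (illustrative only, NOT Google's recognizer):
# emit a partial hypothesis after each frame and revise earlier words
# when later context resolves a homophone like "their"/"there".

# Hypothetical per-frame output: the candidate spellings the recognizer
# is considering for each spoken word.
FRAMES = [
    {"their", "there"},  # ambiguous until the next word arrives
    {"keys"},
    {"are"},
    {"here"},
]

def decode_stream(frames):
    """Yield a (possibly revised) full hypothesis after each frame."""
    words = []
    for candidates in frames:
        # Provisionally commit to one candidate; a real decoder would
        # score every candidate with a learned language model.
        words.append(sorted(candidates)[-1])
        # Toy "language model" rule: a possessive reading is likely when
        # a noun follows, so revise the earlier word in place.
        for i in range(len(words) - 1):
            if words[i] in ("their", "there"):
                words[i] = "their" if words[i + 1] == "keys" else "there"
        yield " ".join(words)

for partial in decode_stream(FRAMES):
    print(partial)  # the on-screen transcript updates word by word
```

Running it shows the first provisional guess ("there") being corrected to "their" as soon as "keys" arrives — the same mid-sentence revision the article describes, just with a hard-coded rule standing in for the real model.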
So what’s the catch? Well, it only works in Gboard, Google’s keyboard app, and it only works on Pixels, and it only works in American English. So in a way this is just kind of a stress test for the real thing.
“Given the trends in the industry, with the convergence of specialized hardware and algorithmic improvements, we are hopeful that the techniques presented here can soon be adopted in more languages and across broader domains of application,” writes Google, as if it is the trends that need to do the hard work of localization.
Making speech recognition more responsive and having it work offline is a nice development. But itʼs sort of funny considering hardly any of Googleʼs other products work offline. Are you going to dictate into a shared document while youʼre offline? Write an email? Ask for a conversion between liters and cups? Youʼre going to need a connection for that! Of course this will also be better on slow and spotty connections, but you have to admit itʼs a little ironic.
HN Discussion: https://news.ycombinator.com/item?id=19362755
Posted by smacktoward (karma: 38928)
Post stats: Points: 112 - Comments: 46 - 2019-03-11T20:57:37Z
#HackerNews #150m #allegedly #award #gave #googles #page #rubin #stock
It looks like Facebook is not the only one abusing Apple’s system for distributing employee-only apps to sidestep the App Store and collect extensive data on users. Google has been running an app…
Article word count: 634
HN Discussion: https://news.ycombinator.com/item?id=19038258
Posted by minimaxir (karma: 32180)
Post stats: Points: 230 - Comments: 84 - 2019-01-30T19:17:03Z
#HackerNews #also #apples #back #collector #data #door #googles #peddling #through
It looks like Facebook is not the only one abusing Apple’s system for distributing employee-only apps to sidestep the App Store and collect extensive data on users. Google has been running an app called Screenwise Meter, which bears a strong resemblance to the app distributed by Facebook Research that has now been barred by Apple, TechCrunch has learned.
In its app, Google invites users aged 18 and up (or 13 if part of a family group) to download the app by way of a special code and registration process using an Enterprise Certificate. That’s the same type of policy violation that led Apple to shut down Facebook’s similar Research VPN iOS app, which had the knock-on effect of also disabling usage of Facebook’s legitimate employee-only apps — which run on the same Facebook Enterprise Certificate — and making Facebook look very iffy in the process.
Google’s Screenwise Meter app for iPhones. (Images: Google)
First launched in 2012, Screenwise lets users earn gift cards for sideloading an Enterprise Certificate-based VPN app that allows Google to monitor and analyze their traffic and data. Google has rebranded the program as part of the Cross Media Panel and Google Opinion Rewards programs that reward users for installing tracking systems on their mobile phone, PC web browser, router, and TV. In fact, Google actually sends participants a special router that it can monitor.
Originally, Screenwise was open to users as young as 13, just like Facebook’s Research app that’s now been shut down on iOS but remains on Android. Now, according to the site’s Panelist Eligibility rules, Google requires the primary users of its Opinion Rewards to be 18 or older, but still allows secondary panelists as young as 13 in the same household to join the program and have their devices tracked, as demonstrated in a video created in August of last year, underscoring that the program is still active.
Unlike Facebook, Google is much more upfront about how its research data collection programs work, what’s collected, and that it’s directly involved. It also gives users the option of “guest mode” for when they don’t want traffic monitored, or someone younger than 13 is using the device.
Putting aside the not-insignificant privacy issues — in short, many people lured by financial rewards may not fully grasp what it means to have a company monitoring all of their screen-based activity — and the lengths tech businesses will go to in order to amass more data about users and gain an edge on competitors, Google Screenwise Meter for iOS appears to violate Apple’s policy.
This states, in essence, that the Enterprise Certificate program for distributing apps without the App Store or Apple’s oversight is only for internal employee-only apps.
Google walks users through how to install the Enterprise Certificate and VPN on their phone. Developers seeking to do external testing on iOS are supposed to use the TestFlight system, which subjects apps to Apple’s review and limits their distribution to 10,000 testers.
We have reached out to both Apple and Google for comment on whether this app is the same as, or different from, the app Facebook had been distributing.
If Apple considers this a violation of its Enterprise Certificate policy, it could shut down Screenwise’s ability to run on iOS. And if it truly wanted to punish Google like it did Facebook, it could invalidate the certifications for all of Google’s legitimate apps that run using the same certificate.
That could throw a wrench into Google’s product development and daily workflow, doing more damage than just removing one way the company gathers competitive intelligence.
We’ll update this post as we learn more.