The controversial panel lasted just a little over a week.
HN Discussion: https://news.ycombinator.com/item?id=19578043
Posted by minimaxir (karma: 34875)
Post stats: Points: 96 - Comments: 138 - 2019-04-04T23:12:44Z
The Google logo featured in the opening of a company office in Berlin, Germany, on January 22, 2019. Carsten Koall/Getty Images
This week, Vox and other outlets reported that Google’s newly created AI ethics board was falling apart amid controversy over several of the board members.
Well, it’s officially done falling apart — it’s been canceled. Google told Vox on Thursday that it’s pulling the plug on the ethics board.
The board survived for barely more than one week. Founded to guide “responsible development of AI” at Google, it would have had eight members and met four times over the course of 2019 to consider concerns about Google’s AI program. Those concerns include how AI can enable authoritarian states, how AI algorithms produce disparate outcomes, whether to work on military applications of AI, and more. But it ran into problems from the start.
Thousands of Google employees signed a petition calling for the removal of one board member, Heritage Foundation president Kay Coles James, over her comments about trans people and her organization’s skepticism of climate change. Meanwhile, the inclusion of drone company CEO Dyan Gibbens reopened old divisions in the company over the use of the company’s AI for military applications.
Board member Alessandro Acquisti resigned. Another member, Joanna Bryson, defending her decision not to resign, said of James, "Believe it or not, I know worse about one of the other people." Other board members found themselves swamped with demands that they justify their decision to remain on the board.
Thursday afternoon, a Google spokesperson told Vox that the company has decided to dissolve the panel, called the Advanced Technology External Advisory Council (ATEAC), entirely. Here is the company’s statement in full:
It’s become clear that in the current environment, ATEAC can’t function as we wanted. So we’re ending the council and going back to the drawing board. We’ll continue to be responsible in our work on the important issues that AI raises, and will find different ways of getting outside opinions on these topics.
The panel was supposed to add outside perspectives to ongoing AI ethics work by Google engineers, all of which will continue. Hopefully, the cancellation of the board doesn’t represent a retreat from Google’s AI ethics work, but a chance to consider how to more constructively engage outside stakeholders.
The board was turning into a huge liability for Google
The board’s credibility first took a hit when Alessandro Acquisti, a privacy researcher, announced on Twitter that he was stepping down, arguing, “While I’m devoted to research grappling with key ethical issues of fairness, rights & inclusion in AI, I don’t believe this is the right forum for me to engage in this important work.”
Meanwhile, the petition to remove Kay Coles James had attracted more than 2,300 signatures from Google employees and showed no signs of losing steam.
As anger about the board intensified, board members were drawn into extended ethical debates about why they were on the board, which can’t have been what Google was hoping for. On Facebook, board member Luciano Floridi, a philosopher of ethics at Oxford, mused:
Asking for [Kay Coles James’s] advice was a grave error and sends the wrong message about the nature and goals of the whole ATEAC project. From an ethical perspective, Google has misjudged what it means to have representative views in a broader context. If Mrs. Coles James does not resign, as I hope she does, and if Google does not remove her (https://medium.com/…/googlers-against-transphobia-and-hate-…), as I have personally recommended, the question becomes: what is the right moral stance to take in view of this grave error?
He ultimately decided to stay on the panel, but that was not the kind of ethical debate Google had hoped to spark, and it became hard to imagine Floridi and James working together.
That wasn’t the only problem. I argued a day ago that, outrage aside, the board was not well set up for success. AI ethics boards like Google’s, which are in vogue in Silicon Valley, largely appear not to be equipped to solve, or even make progress on, hard questions about ethical AI progress.
A role on Google’s AI board was an unpaid, toothless position that could not possibly, in four meetings over the course of a year, arrive at a clear understanding of everything Google is doing, let alone offer nuanced guidance on it. There are urgent ethical questions about the AI work Google is doing, and there was no real avenue by which the board could address them satisfactorily. From the start, it was badly designed for the goal.
Now it has been canceled.
Google still needs to figure out AI ethics — just not like this
Many of Google’s AI researchers are active in work to make AI fairer and more transparent, and clumsy missteps by management won’t change that. The Google spokesperson I talked to pointed to several documents purportedly reflecting Google’s approach to AI ethics: a detailed mission statement outlining the kinds of research the company will not pursue, a look back at the start of this year at whether its AI work so far is producing social good, and detailed papers on the state of AI governance.
Ideally, an outside panel would complement that work, increase accountability, and help ensure that every Google AI project is subject to appropriate scrutiny. Even before the outrage, the board wasn’t set up to do that.
Google’s next stab at external accountability will need to solve those issues. A better board might meet more often and engage more stakeholders. It would also publicly and transparently make specific recommendations, and Google would tell us whether it had followed them, and why.
It’s important that Google gets this right. AI capabilities are continuing to advance, leaving most Americans nervous about everything from automation to data privacy to catastrophic accidents with advanced AI systems. Ethics and governance can’t be a sideshow for companies like Google, and they’ll be under intense scrutiny as they try to navigate the challenges they’re creating.