Google worker rebellion against military project grows
by Staff Writers
San Francisco (AFP) May 16, 2018
An internal petition calling for Google to stay out of "the business of war" was gaining support Tuesday, with some workers reportedly quitting to protest a collaboration with the US military.

About 4,000 Google employees were said to have signed the petition, which began circulating about three months ago and urges the internet giant to refrain from using artificial intelligence to make US military drones better at recognizing what they are monitoring.

Tech news website Gizmodo reported this week that about a dozen Google employees are quitting in an ethical stand.

The California-based company did not immediately respond to inquiries about what was referred to as Project Maven, which reportedly uses machine learning and engineering talent to distinguish people and objects in drone videos for the Defense Department.

"We believe that Google should not be in the business of war," the petition reads, according to copies posted online. "Therefore, we ask that Project Maven be cancelled, and that Google draft, publicize and enforce a clear policy stating that neither Google nor its contractors will ever build warfare technology."

- 'Step away' from killer drones -

The Electronic Frontier Foundation, an internet rights group, and the International Committee for Robot Arms Control (ICRAC) were among those who have weighed in with support.

While reports indicated that artificial intelligence findings would be reviewed by human analysts, the technology could pave the way for automated targeting systems on armed drones, ICRAC argued in an open letter backing the Google employees opposed to the project.

"As military commanders come to see the object recognition algorithms as reliable, it will be tempting to attenuate or even remove human review and oversight for these systems," ICRAC said in the letter. "We are then just a short step away from authorizing autonomous drones to kill automatically, without human supervision or meaningful human control."

Google has gone on the record saying that its work to improve machines' ability to recognize objects is not for offensive uses, but published documents show a "murkier" picture, the EFF's Cindy Cohn and Peter Eckersley said in an online post last month.

"If our reading of the public record is correct, systems that Google is supporting or building would flag people or objects seen by drones for human review, and in some cases this would lead to subsequent missile strikes on those people or objects," said Cohn and Eckersley. "Those are hefty ethical stakes, even with humans in the loop further along the 'kill chain.'"

The EFF and others welcomed the internal Google debate, stressing the need for moral and ethical frameworks on the use of artificial intelligence in weaponry.

"The use of AI in weapons systems is a crucially important topic and one that deserves an international public discussion and likely some international agreements to ensure global safety," Cohn and Eckersley said.

"Companies like Google, as well as their counterparts around the world, must consider the consequences and demand real accountability and standards of behavior from the military agencies that seek their expertise -- and from themselves."
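The EFF's argument turns on the "human in the loop" distinction: a recognition model may flag what a drone sees, but a person decides what happens next. As a purely illustrative sketch of that pattern (in Python, with invented names such as Detection and flag_for_review; this is not Google's or the Pentagon's actual system):

```python
# A minimal, hypothetical sketch of the "human in the loop" pattern the EFF
# describes: an object-recognition model only *flags* detections for a human
# analyst, and nothing is acted on without explicit human review.
# All names here are illustrative assumptions, not a real system's API.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g. "vehicle", "person"
    confidence: float  # model's score in [0, 1]
    frame_id: int      # which video frame produced it

def flag_for_review(detections, threshold=0.8):
    """Queue high-confidence detections for a human analyst.

    Crucially, this function authorizes no action itself;
    it only filters what a person will look at.
    """
    return [d for d in detections if d.confidence >= threshold]

# Example: only the 0.92 detection reaches the analyst's queue.
frames = [Detection("vehicle", 0.92, 101), Detection("person", 0.41, 101)]
for d in flag_for_review(frames):
    print(f"frame {d.frame_id}: {d.label} ({d.confidence:.2f}) -> awaiting human review")
```

ICRAC's warning is precisely that, once the model's confidence scores come to be trusted, the temptation is to act on the flagged queue directly and attenuate the human step.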
Twitter tweak steps up fight against trolls

The new approach looks at behavioral patterns of users in addition to the content of their tweets, allowing Twitter to find and mute online bullies and trolls. Even if the offending tweets do not violate Twitter policy, they may be hidden from users if they are deemed to "distort" the conversation, Twitter said.

The announcement is the latest "safety" initiative by Twitter, which is seeking to filter out offensive speech while remaining an open platform. Twitter already uses artificial intelligence and machine learning in this effort, but the latest initiative aims to do more by focusing on the actions of certain users in addition to the content.

"Our ultimate goal is to encourage more free and open conversation," chief executive Jack Dorsey said. "To do that we need to significantly reduce the ability to game and skew our systems. Looking at behavior, not content, is the best way to do that."

A Twitter blog post said the move targets "troll-like behavior" directed at certain users and tweets with derisive responses.

"Some troll-like behavior is fun, good and humorous. What we're talking about today are troll-like behaviors that distort and detract from the public conversation on Twitter," said the blog from Twitter executives Del Harvey and David Gasca. "Some of these accounts and tweets violate our policies, and, in those cases, we take action on them. Others don't but are behaving in ways that distort the conversation."

Harvey and Gasca said the challenge has been to address "disruptive behaviors that do not violate our policies but negatively impact the health of the conversation."

The new approach does not wait for people who use Twitter to report potential issues. "There are many new signals we're taking in, most of which are not visible externally," the blog post said. "Just a few examples include if an account has not confirmed their email address, if the same person signs up for multiple accounts simultaneously, accounts that repeatedly tweet and mention accounts that don't follow them, or behavior that might indicate a coordinated attack."

In some cases, if the content does not violate Twitter policies, it will not be deleted but only shown when a user clicks on "show more replies."

"The result is that people contributing to the healthy conversation will be more visible in conversations and search," Harvey and Gasca wrote.

Twitter said its tests of the approach show a four percent drop in abuse reports from search and eight percent fewer abuse reports from conversations.
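The blog post's examples suggest a simple shape for behavior-based ranking: boolean account signals feed a score, and a high score collapses a reply rather than deleting it. A hypothetical sketch in Python (the signal names, weights and threshold are invented for illustration; Twitter has not published its actual model):

```python
# A hypothetical sketch of behavior-based ranking as Twitter describes it:
# signals about the *account* (not the tweet's content) feed a score, and
# high-scoring accounts' replies are collapsed behind "Show more replies"
# rather than removed. Weights and threshold are assumptions, not Twitter's.
SIGNAL_WEIGHTS = {
    "email_unconfirmed": 0.2,      # account has not confirmed its email
    "simultaneous_signups": 0.3,   # same person creating multiple accounts
    "mentions_non_followers": 0.3, # repeatedly tweeting at strangers
    "coordinated_activity": 0.4,   # pattern suggesting a coordinated attack
}

def behavior_score(signals: dict) -> float:
    """Sum the weights of whichever signals fired for this account."""
    return sum(w for name, w in SIGNAL_WEIGHTS.items() if signals.get(name))

def rank_reply(signals: dict, threshold: float = 0.5) -> str:
    """Collapse, don't delete: content stays reachable behind a click."""
    if behavior_score(signals) >= threshold:
        return "collapsed"  # shown only under "Show more replies"
    return "visible"

# Example: an unconfirmed account mass-mentioning non-followers is collapsed.
print(rank_reply({"email_unconfirmed": True, "mentions_non_followers": True}))
```

Note the design choice the executives emphasize: a reply crossing the threshold is collapsed, not removed, so the platform stays "open" while the reply's visibility in conversations and search drops.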
Facebook shut 583 million fake accounts
Paris (AFP) May 15, 2018

Facebook axed 583 million fake accounts in the first three months of 2018, the social media giant said Tuesday, detailing how it enforces "community standards" against sexual or violent images, terrorist propaganda or hate speech. Responding to calls for transparency after the Cambridge Analytica data privacy scandal, Facebook said those closures came on top of blocking millions of attempts to create fake accounts every day. Despite this, the group said fake profiles still make up 3-4 percent of ...