
Thread: TT: Technology Thread

  1. #51

    By: Michael L. Tan - @inquirerdotnet Philippine Daily Inquirer / 05:09 AM April 04, 2018

    There’s a quiet cultural revolution going on in offices throughout the country, one involving the very way we think. It has meant an audit, not of numbers, but of all kinds of procedures and technologies.

    I thought long and hard about a short title for my column that would describe what this cultural revolution is all about and settled on that one word, “confidential,” which is the way we used to handle matters, mainly around correspondence. For large offices, you would write or stamp “Confidential” on a document and seal it well before passing it on. To some extent that’s still being done today, sometimes with almost ridiculous but ineffective zeal.

    Look at the mail you’re getting at home and you’ll find that much of it is still stapled shut, so that you have to remove the staple wire before opening the envelope or risk ripping through the document inside, which might be a check, now mangled and liable to be rejected by the bank when you try to deposit it.

    The other way documents were “secured” was to tape them end to end. Being on the receiving end of so many of these documents, I would tell my staff to call the sender and politely complain about how they had mummified the document, making it practically impossible to open.

    Now comes a law that could either worsen, or rationalize, the way we handle confidential information. The full name of that law would take an entire paragraph, so I will just use the short one, the Data Privacy Act of 2012. It took some years to get a bureaucracy going, together with implementing rules and regulations, all to ensure the protection of personal information in all of the country’s offices.

    Overseeing this gargantuan task is a National Privacy Commission, which has been pressing hard on institutions to establish data protection offices that will ensure that all sensitive personal information is secured, and not just by stamping “confidential” and stapling or mummifying documents.

    Call centers and data

    The law was actually forced upon us by the business process outsourcing (BPO) industry, which employed 1.3 million Filipinos last year and still counting. Most of them are in the contact center subsector, better known as call centers, and they have access to all kinds of sensitive information provided not just by Filipinos but by residents of countries throughout the world. Remember the last time you wanted to clarify a credit card billing you thought was mistaken, and how they asked you about everything from your mother’s maiden name to your last credit transaction.

    Think of all the forms you’ve had to fill out over the years. To get some kind of insurance, for example, you may have been asked to submit to a full medical checkup where you were asked all kinds of questions about the causes of death in your family, and your own existing illnesses, surgical procedures … all the way up to your sex life: how many people have you had sex with, were they male or female, and what kind of sex did you have?

    All that information is stored in computer databases.

    Educational institutions have all kinds of sensitive data. I’m thinking of two offices in particular. There’s the health service’s medical records, and there’s the registrar, which still has the “jackets” — enrollment forms and, until recently, grades — of each and every person who has ever enrolled in UP, whether you finished or not. Diliman, being the main campus, has files that go back more than a century. I have a feeling many UP alumni would worry more about their grades than their medical records being kept private.

    The new law spells out penalties, including imprisonment, for leaking out sensitive personal information. Hold your breath as I mention some of them: “race, ethnic origin, marital status, age, color and religion, philosophical or political affiliation,” “health, education, genetics or sexual life,” “offenses committed or alleged to have been committed,” “social security, current health records, licenses or its denials, tax returns.”

    Each of those items could be further broken down when you think of what “sensitive” can mean. Under this law, we cannot provide the grades of a student to anyone, not even the parents, once the student has reached legal age (18), unless signed consent is given.


    The reason I said all this involves a cultural revolution is that the idea of data privacy is almost nonexistent in our culture. The norm is to share information. Have a crush on someone and want to send a gift on his or her birthday? No problem, everyone knows what office and who in that office can get you that information.

    Birthdays seem relatively benign but think hard about it and you might be opening the doors to unwanted attention.

    Salaries are another example, a source of a lot of resentment within an office, so in UP Diliman we are now banishing the days of open pay slips. Instead, the pay slips come in an encrypted envelope, like the ones banks give you when you get a new ATM card with a password.

    Medical records are particularly sensitive. For years now, as dean and then as chancellor, I’ve had to sign all applications for reimbursement of medical expenses and I’ve always been uncomfortable at how the papers go through several offices and personnel, with all kinds of sensitive information. Culture comes into the picture with people, often with the best of intentions, telling friends, “Did you know Professor xxx has cancer? Maybe we can help raise funds for her.”

    Golden rule

    In other cases, malicious intentions may come in. It’s been so tiring having to respond to anonymous “sumbong” (whistleblowing) made to the government’s hotline, with distorted information that clearly came from access to official files in government agencies, questioning everything from bonuses to an out-of-town conference and often out of a desire for character assassination rather than a concern over corruption. The Filipino term “maiinit na mata” (hot eyes) is an appropriate description of the sources of data breaches.

    “Data breach” is a term that will make it into our vocabularies as the Data Privacy Act is implemented, and breaches are fairly easy given that so much information is now stored in computer databases. The Act requires that databases be secured with layers of passwords and security measures. But even the best of systems will still be heavily challenged by culture, to the point where we may have to require all staff — secretaries, receptionists, even drivers — to sign nondisclosure agreements, meaning they are bound by law not to give out sensitive information, not just from computers and documents but from meetings and even conversations.

    But more than nondisclosure agreements, we will have to look for other ways to tackle cultural attitudes to privacy. It will mean, for example, reminding someone during a conversation that they just named someone and gave away sensitive information about that person. It will mean, too, periodic meetings discussing data privacy and breaches and the foundation of it all, the golden rule: Do not do unto others what you do not want done to you.

  2. #52
    Facebook says 87 million may be affected by data breach

    Agence France-Presse / 07:58 AM April 05, 2018

    Facebook said on Wednesday that the personal data of up to 87 million users was improperly shared with British political consultancy Cambridge Analytica, as Mark Zuckerberg defended his leadership at the huge social network.

    Facebook’s estimate was far higher than news reports suggesting 50 million users may have been affected in the privacy scandal, which has roiled the company and sparked questions for the entire internet sector on data protection.

    Zuckerberg told reporters on a conference call that he accepted responsibility for the failure to protect user data but maintained that he was still the best person to lead the network of two billion users.

    “I think life is about learning from the mistakes and figuring out how to move forward,” he said in response to a question on his ability to lead the company.

    “When you’re building something like Facebook, which is unprecedented in the world, there are things that you’re going to mess up… What I think people should hold us accountable for is if we are learning from our mistakes,” he also said.

    Zuckerberg said 87 million was a high estimate of those affected by the breach, based on the maximum number of connections to users who downloaded an academic researcher’s quiz that scooped up personal profiles.

    “I’m quite confident it will not be more than 87 million, it could well be less,” he said.

    To remedy the problem, Zuckerberg said Facebook must “rethink our relationship with people across everything we do” and that it will take a number of years to regain user trust.

    The new estimate came as Facebook unveiled clearer terms of service to enable users to better understand data sharing, and as a congressional panel said Zuckerberg would appear next week to address privacy issues.

    Facebook has been scrambling for weeks in the face of the disclosures on hijacking of private data by the consulting group working for Donald Trump’s 2016 campaign.

    The British firm responded to the Facebook announcement by repeating its claim that it did not use data from the social network in the 2016 election.

    “Cambridge Analytica did not use GSR (Global Science Research) Facebook data or any derivatives of this data in the US presidential election,” the company said in a tweet. “Cambridge Analytica licensed data from GSR for 30 million individuals, not 87 million.”

    Zuckerberg on the Hill

    Meanwhile, Facebook’s chief technology officer Mike Schroepfer said new privacy tools for users of the huge social network would be in place by next Monday.

    “People will also be able to remove apps that they no longer want. As part of this process, we will also tell people if their information may have been improperly shared with Cambridge Analytica,” he said in a statement.

    Schroepfer’s post was the first to cite the figure of 87 million while noting that most of those affected were in the United States (US).

    Facebook also said its new terms of service would provide clearer information on how data is collected and shared without giving the social network additional rights.

    Earlier Wednesday, the House of Representatives’ Energy and Commerce Committee announced what appeared to be the first congressional appearance by Zuckerberg since the scandal broke.

    The April 11 hearing will “be an important opportunity to shed light on critical consumer data privacy issues and help all Americans better understand what happens to their personal information online,” said the committee’s Republican chairman Greg Walden and ranking Democrat Frank Pallone in a statement.

    The Facebook co-founder is also invited to other hearings amid a broad probe on both sides of the Atlantic.

    Deleting Russian ‘trolls’

    Zuckerberg further told reporters in the conference call that he was committed to ensuring that Facebook and its partners do a better job of protecting user data, and that it must take a more serious approach after years of being “idealistic” about how the platform is used.

    “We didn’t take a broad enough view on what our responsibility is, and that was a huge mistake. It was my mistake,” Zuckerberg admitted.

    He said that while “there are billions of people who love the service,” there is also a potential for abuse and manipulation.

    “It’s not enough just to give people a voice,” he also said. “We have to make sure people don’t use that voice to hurt people or spread disinformation.”

    Late Tuesday, Facebook said it deleted dozens of accounts linked to a Russian-sponsored Internet unit, which has been accused of spreading propaganda and other divisive content in the US and elsewhere.

    The social networking giant said it removed 70 Facebook accounts and 65 Instagram accounts, and took down 138 Facebook pages controlled by the Russia-based Internet Research Agency (IRA).

    The agency has been called a “troll farm” due to its deceptive posts aimed at sowing discord and propagating misinformation.

    The unit “has repeatedly used complex networks of inauthentic accounts to deceive and manipulate people who use Facebook, including before, during and after the 2016 US presidential elections,” said Facebook chief security officer Alex Stamos in a statement.

  3. #53
    Zuckerberg flubs details of Facebook privacy commitments

    Associated Press / 06:34 AM April 13, 2018

    MENLO PARK, Calif. - Over two days of questioning in Congress, Facebook CEO Mark Zuckerberg revealed that he didn’t know key details of a 2011 consent decree with the Federal Trade Commission that requires Facebook to protect user privacy.

    With congressional hearings over and no immediate momentum behind calls for regulation, the biggest hammer still hanging over Facebook in the U.S. is a fresh FTC investigation. The probe follows revelations that pro-Trump data-mining firm Cambridge Analytica acquired data from the profiles of millions of Facebook users. Facebook also faces inquiries in Europe.

    The 2011 agreement bound Facebook to a 20-year privacy commitment, and any violations of that pact could cost Facebook a ton of money, even by its flush-with-cash standards. If Zuckerberg’s testimony before Congress is any indication, the company might have something to worry about.

    Zuckerberg repeatedly assured lawmakers Tuesday and Wednesday that he believed Facebook is in compliance with that 2011 agreement. But he also flubbed simple factual questions about the consent decree.

    “Congresswoman, I don’t remember if we had a financial penalty,” Zuckerberg said under questioning by Colorado Rep. Diana DeGette on Wednesday.

    “You’re the CEO of the company, you entered into a consent decree and you don’t remember if you had a financial penalty?” she asked. She then pointed out that the FTC doesn’t have the authority to issue fines for first-time violations.

    In response to questioning by Rep. Mike Doyle of Pennsylvania, Zuckerberg acknowledged: “I’m not familiar with all of the things the FTC said.”

    Zuckerberg also faced several questions from lawmakers about how long it takes for Facebook to delete user data from its systems. He didn’t know.

    The 2011 consent decree capped years of Facebook privacy mishaps, many of which revolved around its early attempts to follow users and their friends around the web. Any violations of the 2011 agreement could subject Facebook to fines of $41,484 per violation per user per day. To put that in context, Facebook could theoretically owe $8 billion for one single day of a violation affecting all of its American users.

    The current FTC investigation will look at whether Facebook engaged in “unfair acts” that cause “substantial injury” to consumers.

    David Vladeck, a Georgetown University law professor who headed the FTC’s bureau of consumer protection when Facebook signed the deal, said in a blog post this month that Facebook’s argument that it didn’t violate the deal is “far-fetched.” Two days of testimony didn’t change his mind.

    “Most of the reforms Facebook has talked about in the past couple of weeks proposed safeguards that should have been in place years ago,” he said by email on Wednesday following 10 hours of Zuckerberg’s testimony.

    At issue are at least three sections of the decree.

    The first requires that users give “affirmative express consent” any time that data they haven’t made public is shared with a third party. Facebook also has to tell users what data will be shared, who the third parties are, and explicitly note that the sharing exceeds their privacy-setting restrictions.

    “In my view, these requirements were not met,” Vladeck said. Sen. Catherine Cortez Masto of Nevada made a similar point Tuesday, arguing that user consent couldn’t have been buried in their privacy settings.

    “It had to be something very specific. Something very simple,” she said. “And that did not occur. Had that occurred, we wouldn’t be here today talking about Cambridge Analytica.”

    A second portion of the decree forbids Facebook from sharing user-deleted data with third parties after 30 days, assuming the information was stored on servers under Facebook’s control.

    The third and broadest part binds Facebook to a “comprehensive privacy program” vetted by an outside auditor. The question raised there is whether Facebook acted reasonably and quickly when it found out there may have been breaches of the deal, according to law professor William Kovacic of George Washington University, who served as an FTC commissioner when Facebook was being investigated but left before the settlement was announced.

    Facebook has argued that it only shared data that was made public by users, or permitted by their privacy settings. Zuckerberg said Wednesday that the audits the company committed to never turned up problems.

    “Was it deliberate? Was it inadvertent? Was there a dramatic lack of care?” Kovacic said. “Everything depends on how they go through this painstaking process of looking at what Facebook promised and what it actually did.”

    The largest fine the FTC has ever imposed in a privacy case was a $22.5 million award in a settlement with Google in 2012. Kovacic said the investigation could take several months. If a case is brought, it would be prosecuted by the Justice Department.

  4. #54
    Facebook in the time of lies, slaughter, and Duterte

    By: Boying Pimentel - @inquirerdotnet US Bureau / 02:38 AM April 10, 2018

    Facebook is our gateway to friends, family, and loved ones. In the age of Duterte, it’s also a swamp, a cesspool of lies, and mean-spiritedness, a weapon used to justify mass slaughter and corruption.

    It’s tempting at times to shut it down.

    As Facebook founder Mark Zuckerberg gets ready for a grilling before the U.S. Congress on how the social network has morphed into a scary, treacherous platform, there is even a call to Facebook users to simply delete their accounts.

    But no, I’m not going to do that.

    For Filipinos, there are ways to make the most of Facebook without being mired in and overwhelmed by the nastiness. There are ways to use Facebook to understand, survive, and even find ways to join the fight against Duterte’s fascism.

    The first step for me was to answer two questions:

    What do I really want to get out of Facebook?

    And what role must Facebook never play in my life?

    The answer to Question No. 1 is simple: I use Facebook to stay connected and celebrate life with friends and family.

    The answer to Question No. 2 is increasingly becoming clear: Facebook IS NOT, SHOULD NOT, and SHOULD NEVER BE my primary source of news and information about what’s going on in the world.

    Let’s face it: As a source of news and insights, Facebook can be unreliable, chaotic, and dangerous.

    And I’m not just referring to Duterte’s now infamous army of trolls and supporters. We now come across so many lies and half-truths on newsfeeds that I’ve become extremely careful about what I read.

    For example, when I first saw posts blasting Duterte for ordering soldiers to shoot women guerrillas in the vagina, my initial reaction was: “Oh, come on, he can’t be that stupid.”

    It just seemed like such an extreme, over-the-top thing to say, that I didn’t believe he’d say it. Of course, it turned out to be true. But only after checking with several news sources, including international publications, did that become clear.

    But that’s how I typically react now to many posts or reports that pop up on my Facebook feed. I take my time before believing a shocking or outrageous claim, and that includes posts that fit into how I see the world.

    I also know not all of the roughly 2,000 people in my network share my views. I know many of them disagree with me, and quite a few probably hate my guts because of what I’ve written about Dutertismo.

    Many of them became part of my network because we knew each other casually many years ago or because they grew familiar with my writing and friended me. I can’t say that we really know each other well, but over the past two years, it’s become clear that they fall into two main camps, which is how I’ve organized them on Facebook: those who are against Duterte, and those who support him.

    In a recent article on Why Most Writers Denounce Duterte, the poet and novelist Krip Yuson mentioned his decision not to unfriend Facebook “friends” who support Duterte: “Unlike some emotional FB users, I haven’t dropped Friends on my list who happen to be on the other side. This way, I still get to read their posts, thus allowing for a reckoning of the generally poor thinking that dictates their choice of idols, their biases and prejudices, and the often irrational way they are caught up on the piteous side of national folly (a stronger word than foolishness).”

    I agree. The posts of pro-Duterte people in my network may be cringe-worthy, but they can serve a purpose: they offer insights into the rise of Dutertismo.

    We just have to know how to manage the information flow.

    I did this by essentially using the social media platform the way I use television. I created “channels,” or “friends lists” as they are called on FB, to organize people in my network and the way I access the seemingly endless stream of information.

    This has given me a deeper sense of the political divisions, which are complex.

    The pro-Duterte people have many reasons for supporting him. Many of them grew tired of the elitist, bureaucratic, ineffective, and corrupt politics represented by the Aquinos and thought a tough-talking mayor from Davao would be the answer.

    But within their ranks are the extremists. These are the people who have embraced and endorsed the most despicable positions of the Duterte regime.

    They essentially parrot, endorse and/or defend the worst, vilest pronouncements of Duterte and his government.

    They think his jokes about rape and shooting women in the vagina are funny.

    They reject reports of mass killings under this government or even defend the slaughter as a necessary step: “E mga kriminal naman lahat iyan e. They’re all criminals anyway.”

    They dismiss calls for due process as irrelevant and echo Duterte’s view that human rights are unimportant.

    Then there are the anti-Duterte people. Like me, they are vehemently opposed to the killings, are appalled by Duterte’s language, and have grown disgusted with the growing cases of corruption and brutality.

    But even in this camp, there are extremists.

    They take their disgust for Duterte to an irrational level. Many of them are rabidly pro-Aquino and anti-progressive.

    Some appear to take the cues from the mean-spiritedness of the Duterte supporters. They use slut-shaming language to describe Duterte’s women supporters, such as Mocha Uson. They post or share repugnant memes that poke fun at Duterte’s looks or those of his closest allies.

    I spend less than half an hour each week on these channels. Sometimes even less. And I do so mainly to get a quick sense of what’s being discussed and how. Occasionally, I come across content that is worth checking out in depth: a news report, a feature article, a new book, a new academic study, a new documentary.

    But here’s an important point: Facebook may be a good source for leads. But it’s foolish and dangerous to use it as THE main source of news and information.

    The total time I spend browsing on Facebook has been dramatically cut over the past couple of years. And 99% of that time I spend on a channel devoted to family and friends. Yes, we sometimes engage in political discussions, even disputes. But most of the exchanges are personal, respectful. And always fun.

    This is where we talk about our kids, our work, our families.

    This is where we exchange tips on parenting, travel, and household chores.

    This is where we celebrate births and graduations. This is where we express sympathy to friends who’ve lost a loved one.

    And, most important of all, this is where we laugh, where we swap jokes, where we share funny memories, and even, playfully tease one another.

    And this is where my friends and I keep alive and celebrate that cherished Filipino tradition with the occasional post: “Pareng Joey, Pareng Jojo, Pareng Ed, Pareng Rene, kelan sunod na inuman? (When’s the next drinking session?)”

  5. #55
    What Bitcoin Is Really Worth May No Longer Be Such a Mystery

    It’s somewhere between $20 and $800,000, according to economic theory and a night of drinking.

    April 19, 2018, 5:01 PM GMT+8

    It took two economists one three-course meal and two bottles of wine to calculate the fair value of one Bitcoin: $200.

    It took an extra day for them to realize they were one decimal place out: $20, they decided, was the right price for a virtual currency that was worth $1,200 a year ago, flirted with $20,000 in December, and is still around $8,000. Setting aside the fortunes lost on it this year, Bitcoin, by their calculation, is still overvalued, to the tune of about 40,000 percent. The pair named this the Côtes du Rhône theory, after the wine they were drinking.

    “It’s how we get our best ideas. It’s the lubricant,” says Savvas Savouri, a partner at a London hedge fund who shared drinking and thinking duties that night with Richard Jackman, professor emeritus at the London School of Economics. Their quest is one shared by the legions of traders, techies, online scribblers, and gamblers and grifters mesmerized by Bitcoin. What’s the value of a cryptocurrency made of code with no country enforcing it, no central bank controlling it, and few places to spend it? Is it $2, $20,000, or $2 million? Can one try to grasp at rational analysis, or is this just the madness of crowds?

    Answering this question isn’t easy: Buying Bitcoin won’t net you any cash flows, or any ownership of the blockchain technology underpinning it, or really anything much at all beyond the ability to spend or save it. Maybe that’s why Warren Buffett once said the idea that Bitcoin had “huge intrinsic value” was a “joke”—there’s no earnings potential that can be used to estimate its value. But with $2 billion pumped into cryptocurrency hedge funds last year, there’s a lot of money betting the punchline is something other than zero. If Bitcoin is a currency, and currencies have value, surely some kind of stab—even in the dark—should be made at gauging its worth.

    Writing on a tablecloth, Jackman and Savouri turned to the quantity theory of money. Formalized by Irving Fisher in 1911, with origins that go back to Copernicus’s work on the effects of debasing coinage, the theory holds that the price of money is linked to its supply and how often it’s used.

    Here’s how it works. By knowing a money’s total supply, its velocity - the rate at which people use each coin - and the amount of goods and services on which it’s spent, you should be able to calculate price. Estimating Bitcoin’s supply at about 15 million coins (it’s currently a bit more), and assuming each one is used an average of about four times a year, led Jackman and Savouri to calculate that 60 million Bitcoin payments were supporting their assumed $1.2 billion worth of total U.S. dollar-denominated purchases. Using the theory popularized by Fisher and his followers, you can - simplifying things somewhat - divide the $1.2 billion by the 60 million Bitcoin payments to get the price of Bitcoin in dollars. That’s $20.
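    The tablecloth arithmetic above can be sketched in a few lines of code. This is a minimal illustration of the quantity-theory rearrangement the article describes; the inputs (15 million coins, a velocity of four payments per coin per year, $1.2 billion of Bitcoin-denominated purchases) are the article’s own figures, while the function name and structure are mine.

    ```python
    # Quantity theory of money, rearranged for price: if supply * velocity
    # payments per year must carry total_spend_usd worth of purchases,
    # each coin is implicitly worth total_spend_usd / (supply * velocity).
    def quantity_theory_price(total_spend_usd, coin_supply, velocity):
        payments_per_year = coin_supply * velocity   # 15M coins * 4 uses = 60M payments
        return total_spend_usd / payments_per_year

    # The Cotes du Rhone estimate: $1.2B of purchases over 60M payments.
    print(quantity_theory_price(1.2e9, 15e6, 4))  # 20.0
    ```

    The $200 figure the economists first wrote down was simply this result with a misplaced decimal point.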

    So far, so straightforward. It turns out, however, that when it comes to putting a price on Bitcoin, the same equation can yield many different answers. In September, Dan Davies, an analyst at financial research firm Frontline Analysts Ltd., wrote up a “guesstimate” of Bitcoin’s value that he’d originally conducted in 2014 using—again—the quantity theory of money. He plugged in estimates for each variable and got about $600.

    On Dec. 10, Mark Kirker, a high school math teacher in California, published an analysis online using the same equation for the same purpose. He concluded that Bitcoin should be way above then-current levels. He’s since revised the number. Contacted by Bloomberg, he says it could be $15,000.

    How can something be worth $20, $600, and $15,000 within the same theory? One key reason stems from what we don’t know about cryptocurrencies rather than what we do know. We know Bitcoin’s maximum supply is 21 million, and we know the velocity of most commonly used currencies. We don’t know how widely Bitcoin will be adopted tomorrow, how frequently it will transact, or what it will be used for. In Davies’s example, a guide to Bitcoin’s future potential was the illicit drugs market, an obvious home for more-or-less-untraceable digital cash. The United Nations has estimated this market at $120 billion. Plugging in that number helped Davies get to $600.
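    To make the spread concrete, here is a small sketch (mine, not the analysts’) running the same formula under three sets of assumptions. Only the first row’s inputs come from the article; the other two rows use hypothetical supply and velocity values chosen to illustrate how the $600 and $15,000 figures could arise, not the actual inputs Davies or Kirker used.

    ```python
    # Same equation, wildly different answers depending on what you assume
    # Bitcoin will be used for and how often each coin changes hands.
    def price(spend_usd, supply, velocity):
        return spend_usd / (supply * velocity)

    scenarios = {
        "wine-dinner estimate":  (1.2e9, 15e6, 4),     # article's figures -> $20
        "illicit-drug market":   (120e9, 21e6, 10),    # velocity of 10 is my guess -> ~$571
        "broad-adoption guess":  (3.15e12, 21e6, 10),  # hypothetical inputs -> $15,000
    }
    for name, args in scenarios.items():
        print(f"{name}: ${price(*args):,.0f}")
    ```

    The point is that the formula itself is trivial; everything hinges on unknowable inputs, which is why estimates within the same theory span a factor of nearly a thousand.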

    For Kirker, though, drugs and criminals are only part of the story. He imagines including the output of some developing countries where cryptocurrencies might have better takeup than traditional banking. But with so much up in the air, the equation starts to look less like algebra and more like alchemy. Even in the non-Bitcoin world, the velocity of money and its price can fluctuate in ways not predicted by fundamental analysis. “I am not wholly surprised it doesn’t pin down a price target to within a factor of 100 either way,” Davies says.

    Some believe the cloud of confusion has to do with the simple fact that cryptocurrency is something entirely new—it needs a fresh school of economic thinking to go with it. A quantity theory of cryptomoney, perhaps.

    John Pfeffer, formerly a partner at KKR & Co., has written several papers to this effect, arguing that technology is turning the centuries-old equation on its head. Bandwidth and computing resources are the fuel of cryptocurrencies, and they need their place in quantity theory, he argues. His version of the equation imagines a world in which more powerful computers and faster connection speeds combine to lower the cost of maintaining a crypto-economy over time, while the same forces radically increase the availability and speed of its digital coins. There already exist hundreds of tokens other than Bitcoin, pointing to a world where digital currencies are, well, a dime a dozen.

    In a future where cryptocurrencies become a form of economic resource (like fuel, water, or electricity) that’s computerized and commoditized, would anyone get rich from hoarding them in her trading account? No, says Pfeffer. In his view, the more widely used a particular brand of digital cash becomes, the higher the probability its value tends toward zero. In quantity theory terms, cryptocoins’ velocity could go way, way up, while the cost of many services within the crypto-economy could go way, way down. Crypto could change the world and still leave a lot of people with worthless tokens.

    Pfeffer dangles one hope in front of the Bitcoin faithful who dream of riches: the possibility there’s one cryptocurrency out there that will serve as a store of value for the digital world. Like gold, a metal seen by investors as a haven in times of crisis or when the purchasing power of cash is eroding, whichever coin wins that crown will have a completely different use—and price—than the rest. Applying this thinking to Bitcoin, Pfeffer explains, would yield a price target of $260,000 to $800,000.

    Such a value wouldn't be too far off $1 million—where the frequently mocked, frizzy-haired self-help guru James Altucher expects Bitcoin to be in 2020. Software entrepreneur John McAfee has said it will hit $500,000. “If not, I will eat my d--- on national television,” he tweeted. He later doubled his target price. Pfeffer has been more careful than most in warning of significant risk of investment loss. “This could all go substantially to zero for various reasons,” he wrote in December.

    Putting a price on Bitcoin is therefore less about crunching numbers and more about deciding just what it is and what it could be, if anything. That’s appetizing for risk-hungry optimists in the venture capital world, who are accustomed to their investments turning into big hits or big flops. Ride-hailing service Uber Technologies Inc., for example, has lost an eye-watering amount of money, yet it’s one of the most highly valued companies in the world. It’s a bet that more traditional investors would have difficulty justifying using traditional metrics.

    But it also means science and snake oil sit side by side. Quantity theory is one example of how an equation can be remodeled to fit different scenarios or different wishes about where the price will land. And it’s not the only one: Network adoption, the cost curve of Bitcoin mining, and transaction volumes have all been bundled into marketable literature advising traders and investors on what to buy. It’s a thick numbers soup. At least Uber has financial accounts to review.

    Those with long memories also remember the quantitative analyses that underpinned the hot new asset classes of the past, from dot-com stocks to securitized art. These were often sold to investors as new metrics and radical investment theses, only to be ditched when a recession or panicked sell-off hit. “They’re always talking about a new paradigm, but I say it’s the same meat, different gravy,” says Côtes du Rhône theorist Savouri, who maintains traditional economic theory should be embraced rather than ignored by the Bitcoin faithful.

    For Savouri, the easiest way to understand the efflorescence of theories and valuations being bandied about is to opt for a simple, overarching one: the greater fool theory. It says that one fool buys in the hope that there’s an ever-bigger sucker willing to pay more. “The problem,” he says, “is that we don’t breed fools geometrically.”

    Lionel Laurent is a reporter for Bloomberg Gadfly.

  6. #56
    For the first time, Facebook spells out what it forbids

    Associated Press / 07:30 PM April 24, 2018

    NEW YORK - If you’ve ever wondered exactly what sorts of things Facebook would like you not to do on its service, you’re in luck. For the first time, the social network is publishing detailed guidelines on what does and doesn’t belong on its service – 27 pages’ worth of them, in fact.

    So please don’t make credible violent threats or revel in sexual violence; promote terrorism or the poaching of endangered species; attempt to buy marijuana, sell firearms, or list prescription drugs for sale; post instructions for self-injury; depict minors in a sexual context; or commit multiple homicides at different times or locations.

    Facebook already banned most of these actions on its previous “community standards” page, which sketched out the company’s standards in broad strokes. But on Tuesday it will spell out the sometimes gory details.

    The updated community standards will mirror the rules its 7,600 moderators use to review questionable posts, then decide if they should be pulled off Facebook. And sometimes whether to call in the authorities.

    The standards themselves aren’t changing, but the details reveal some interesting tidbits. Photos of breasts are OK in some cases – such as breastfeeding or in a painting – but not in others.

    The document details what counts as sexual exploitation of adults or minors, but leaves room to ban more forms of abuse, should it arise.

    Since Facebook doesn’t allow serial murderers on its service, its new standards even define the term. Anyone who has committed two or more murders over “multiple incidents or locations” qualifies.

    But you’re not banned if you’ve only committed a single homicide. It could have been self-defense, after all.

    Reading through the guidelines gives you an idea of how difficult the jobs of Facebook moderators must be. These are people who have to read and watch objectionable material of every stripe and then make hard calls – deciding, for instance, if a video promotes eating disorders or merely seeks to help people. Or what crosses the line from joke to harassment, from theoretical musing to direct threats, and so on.

    Moderators work in 40 languages. Facebook’s goal is to respond to reports of questionable content within 24 hours. But the company says it doesn’t impose quotas or time limits on the reviewers.

    The company has made some high-profile mistakes over the years. For instance, human rights groups say Facebook has mounted an inadequate response to hate speech and the incitement of violence against Muslim minorities in Myanmar.

    In 2016, Facebook backtracked after removing an iconic 1972 Associated Press photo featuring a screaming, naked girl running from a napalm attack in Vietnam. The company initially insisted it couldn’t create an exception for that particular photograph of a nude child, but soon reversed itself, saying the photo had “global importance.”

    Monika Bickert, Facebook’s head of product policy and counterterrorism, said the detailed public guidelines have been a long time in the works.

    “I have been at this job five years and I wanted to do this that whole time,” she said.

    Bickert said Facebook’s recent privacy travails, which forced CEO Mark Zuckerberg to testify for 10 hours before Congress, didn’t prompt their release now.

    The policy is an evolving document, and Bickert said updates go out to the content reviewers every week. Facebook hopes it will give people clarity if posts or videos they report aren’t taken down. Bickert said one challenge is having the same document guide vastly different “community standards” around the world.

    What passes as acceptable nudity in Norway may not pass in Uganda or the US.

    There are more universal gray areas, too.

    For instance, what exactly counts as political protest? How can you know that the person in a photo agreed to have it posted on Facebook?

    That latter question is the main reason for Facebook’s nudity ban, Bickert said, since it’s “hard to determine consent and age.” Even if the person agreed to be taped or photographed, for example, they may not have agreed to have their naked image posted on social media.

    Facebook uses a combination of the human reviewers and artificial intelligence to weed out content that violates its policies. But its AI tools aren’t close to the point where they could pinpoint subtle differences in context and history – not to mention shadings such as humor and satire – that would let them make judgments as accurate as those of humans.

    And of course, humans make plenty of mistakes themselves.

  7. #57
    The Companies Cleaning the Deepest, Darkest Parts of Social Media

    We spoke to the documentary-makers behind 'The Cleaners,' the film about the people who take down content after you report it.

    This article originally appeared on VICE UK.

    Every minute of every single day, 500 hours of video footage is uploaded to YouTube, 450,000 tweets are tweeted, and a staggering 2.5 million posts are posted to Facebook. We are drowning in content, and within all of that content there's undoubtedly going to be a chunk deemed offensive—stuff that's violent, racist, misogynistic, and so on—which gets reported.

    But what happens once you've reported that content? Who takes care of the next steps?

    Directors Moritz Riesewieck and Hans Block have made a documentary exploring exactly that question, and the answer is much more depressing than you might have imagined. The Cleaners got its UK premiere at Sheffield Doc/Fest in June, so I caught up with the pair shortly after to discuss how social media organizations are cleaning up the internet at the cost of others' lives.

    VICE: Tell me about what drew you to this subject and why you wanted to make a film on it.

    Hans Block: In 2013, a child abuse video went on Facebook and we asked ourselves how this happened because that material is obviously out there in the world but not usually on social media sites. So we began to ask if people were filtering the web or curating what we see. We found out that there were thousands of humans doing the job every day in front of a screen, reviewing what we're supposed to see or not see. We learned that a lot of the work is outsourced to the developing world, and one of the main spots is Manila in the Philippines, and almost nobody knows about it.

    We found out very quickly when trying to contact the workers that it's a very secretive industry, and the company tried to stop the workers from speaking out. The companies use a lot of private policies and screen the accounts of the workers to make sure that nobody is talking with outsiders. They even use code words. Whenever a worker is working for Facebook, they have to say they are working for the "honey badger project." There's a real atmosphere of fear and pressure because there are reprisals for the workers; they have to pay a €10,000 [$11,672] fee if they talk about what they're doing. It's written into their nondisclosure agreements. They even fear they will be put in jail.

    But you managed to track down some workers and ex-workers, and gained their trust for the film. Are these guys moderating all content or just stuff that gets reported?

    Moritz Riesewieck: There are two ways the content is forwarded to the Philippines. The first is a pre-filter, an algorithm, a machine that can analyze the shape of, say, a sexual organ, or the color of blood or certain skin color. So whenever the pre-filter is analyzing and it picks up on something that is inappropriate, the machine will send that content to the Philippines and the content moderators will double check if the machine was right. The second route is when the user flags the content as being inappropriate.

    So this pre-filter algorithm is effectively capable of racial profiling people under the justification of what? Trying to detect terrorists or gangs?

    Hans: We've been trying to work out exactly how this machine is working, and this is one of the big secrets the companies have. We don't know what the machine is trained for in terms of detecting content. There are obvious things like a gun or a naked sexual organ, but some of the moderators told us that skin color is being detected to pick up on things like terrorism, yes.

    What happens when something is deleted? Is it just removed from the user who uploaded it or is it removed universally?

    It is taken down universally. Although, in one case, it's different: child pornography. Whenever a content moderator is reviewing child pornography, they have to escalate this and then report the IP address, the location, and name of user—all the info they have, basically. This gets sent to a private organization in the states and they then analyze all the info and forward it onto the police.

    Are the content moderators adequately trained? Both in understanding the context of what they are reviewing and also in terms of the significance that some of their decisions can have?

    Moritz: I would say this is the biggest scandal about the topic because the workers are very young; they are 18 or 19 and have just left school. [The companies] are recruiting people from the street for these roles. They only request a very low profile of skills, which is basically being able to operate a computer. They are then given three to five days of training, and within that, they have to learn all the guidelines coming from Facebook, Google, YouTube, etc. There are hundreds of examples they have to learn. For example, they have to memorize 37 terror organizations—all their flags, the uniforms, the sayings—all in three to five days. They then have to give the guidelines back to the company because they are afraid someone will leak them.

    Another horrible fact is that the workers only have a few seconds to decide. To fulfill the quota of 25,000 images a day, that means they have three to five seconds on each. You're not able to analyze the text of an image or thoroughly make sure you're making the right decision when you have to review so much content. When you click, you then have another ten options to click based on the reason for deletion—nudity, terrorism, self-harm, etc. They then use the labeling of the content moderators to train the algorithm. Facebook is working very hard to train AI to do the job in the future.

    So the workers are providing the training to an algorithm that will eventually take their own job?

    Hans: It won't be possible for AI to do that kind of job because they can analyze what is in the picture, but what is necessary is reading the context, to interpret what you are seeing. If you see someone fighting, it could be a scene from a play or a film. This sort of thing is something a machine will never be capable of.

    Are the workers trained for the severity and trauma of what they are going to see—death, abuse, child pornography, etc?

    Moritz: They are not trained for that. There is one moderator in the film who, only on her first day, after training and signing the contract, properly realized what she was doing there. She ended up reviewing child pornography on her first day and said to the team leader she was unable to do this, and the response was, "You've signed the contract. It's your job to do that." There's no psychological preparation for them to do their work. They have a quarterly session where the whole team is brought into one room and a psychologist asks, "Does anyone have a problem?" Of course, everyone is looking at the ground and afraid to talk about their problems because they are afraid to lose their job. It's for a good reason that Facebook is outsourcing work to the Philippines because there's such a big social pressure attached: The salary is not just their own—it's often for their whole family, of up to eight to ten people. It's not easy to leave the job.

  8. #58
    ^^^ (Continued)

    What's the scale of the operation out there?

    We can't know the exact amount, but we think that, for Facebook alone, it's around 10,000 people. If you then add in all the other companies that are outsourcing this work to the Philippines, it's around 100,000. It's a big, big industry.

    Bias is something that is explored interestingly in the film. You have a content moderator who describes herself as a "sin preventer" and is very religious, while another person is very pro-President Rodrigo Duterte and his violent anti-crime and drugs stance. Are they imparting their personal views, politics, and ethics onto something that should be objective?

    Hans: Whenever Facebook is asked a question about the guidelines, they try to promote that the guidelines are objective and that they can be executed by everyone. That's not true, and that's what the film is about. It's very important what kind of cultural background you have. There are so many areas in the guidelines that require you to interpret and use your gut feeling to decide about what you see.

    Religion is a big part of the Philippines; Catholicism is very strong. The idea of sacrifice is crucial in their culture. The idea is that sacrificing yourself to get rid of the sins of the world will make it a better place. So, for a lot of the workers, it is viewed as a religious mission. They use religion to give the job meaning, and that helps them do it a bit longer, as when you're traumatized through work you need to find meaning. On a political level, Duterte is very strong there and people believe in what he is doing. Almost all the content moderators we spoke to were really proud he won the election. Some of the people see this work as an extension of his work, so they will just delete what they don't like in line with the country's political views.

    All eyes are on Facebook at the moment post-Cambridge Analytica. How do you think things are going to pan out in the area of content moderation?

    Moritz: Zuckerberg, whenever he is asked in a testimony, says he will hire another 20,000 people across the network on content safety. But this is not the solution. It's not about the number of workers—you can hire another 20,000 low-wage Filipino workers doing the job and it won't fix any problem with online censorship. What he needs to do is to hire really well-trained journalists. Facebook is not just a place to share vacation pictures or invite someone to your birthday anymore; it's the most important communication infrastructure in the world. More and more people are using Facebook to inform themselves, so it's very important who is deciding what is published. Think about if someone else decided what was published on the Guardian's front page tomorrow—that would be a disaster and it would be a scandal, but this is the status quo of the social media companies right now. This has to change, but this costs money, and the only goal of the company is to gain more money... that's why they hire low-wage workers in the Philippines.

    Some of the moderation requests, such as terrorism, are even coming directly from the US government, right?

    Absolutely. The list of terrorist organizations that have to be banned on Facebook comes from Homeland Security, but obviously, in different parts of the world, we have very different ideas of who is a terrorist and who is a freedom fighter. This is a lie that Facebook has always stated: that they are a neutral platform, just a technical tool for the user. It's not true; they are making editorial decisions every day.

    How did you find the people who had left this job were getting on? Are they forced into a silence or tracked or anything?

    Hans: When we were researching, there was one moment when the [content moderation] company was taking photos of us and they were then distributed to everyone in the company—even the former workers—with a warning that talking to us will lead to them being in trouble. There's a really big atmosphere of fear that the company is spreading. One employee contacted us via Facebook, and he was really angry with us and telling us to leave the city, or something will happen to us. So even the former workers are still scared of speaking after leaving. We had lawyers on our team to protect them, however; we knew what was written into their contracts and what we could say in our film.

    One of the greatest tragedies captured in the film is the story of a worker who died by suicide. As far as you're aware, was that an anomaly, or is it happening on a larger scale and not being reported?

    Moritz: It is a wider thing happening in the company. The suicide rate is very high. Whenever we spoke with content moderators, almost everyone knows about a case where someone committed suicide because of the work. It's a problem in the industry. That's why it was important to include that in our film. They need to hire proper psychologists to protect them. The man in question who took his own life worked in reviewing self-harm content and had asked to be removed from doing that several times.

    Are these social media companies aware that people are killing themselves over this work, do you think?

    Hans: Good question. Yes, I think they do know because we made the film, but this is also a problem with outsourcing; it's so easy for Facebook to say, "We don't know about that because it's not our company and we are not responsible because we don't hire them and we're not responsible for the working conditions." This is the price we, and Facebook, are paying for cleaning social media—people killing themselves. We have to pressure these companies as loudly as we can.
