In celebration of the New York Academy of Medicine’s #ColorOurCollections campaign this week, many museums, libraries, and archives hopped on the adult coloring bandwagon and created coloring books to share on Twitter. We’ve been participating by posting various images throughout the week for people to color, from Rosie the Riveter to the Faulkner murals.
Now we have a coloring book as well! We’ve chosen some of our favorite patents from our holdings for you to color:
Or, browse our online catalog for more fascinating patents to color!
Share your coloring creations with us on Twitter using the hashtag #ColorOurCollections.
The first couple of weeks of this, I was mostly just trying to get back my forgotten vocabulary and re-learn all the very common hanzi that look totally different when they're traditional characters instead of simplified. But I realized, even just while watching cheesy Taiwanese dramas, that something had clicked inside my head -- it was the difference between not being able to understand what I was hearing (even with English subtitles, even when I knew all the words) and being able to understand at least some of it, at least a little bit.
Which is tremendously encouraging when you feel like you've been at a plateau for a long time, and felt that none of the formal studying was actually translating into a better understanding of the language.
I will have to make a post soon about the New Improved Vocabulary System that I've implemented -- it's a bit high-tech (well, it's done entirely using Excel and VLOOKUP, so not THAT high-tech) and solves some of the problems I had been having of just making flash cards for every new word I encountered regardless of rarity.
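The heart of that system is really just a frequency lookup. Here's a rough Python sketch of the same idea -- the frequency ranks and cutoff below are invented for illustration, not taken from any real frequency list:

```python
# A VLOOKUP-style filter: look each new word up in a frequency list
# and only make flashcards for words common enough to be worth drilling.

FREQUENCY_RANKS = {  # rank 1 = most common (illustrative data only)
    "我": 1,
    "學習": 520,
    "詞彙": 3100,
    "罕見": 18000,
}

def words_worth_drilling(new_words, max_rank=5000):
    """Keep only words whose frequency rank is known and common enough."""
    return [w for w in new_words
            if FREQUENCY_RANKS.get(w, float("inf")) <= max_rank]

print(words_worth_drilling(["我", "詞彙", "罕見"]))  # → ['我', '詞彙']
```

In a spreadsheet, the same filter is one VLOOKUP against a frequency column plus a threshold check; the point is that rare words get screened out before they ever become cards.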
Here at Slack we are working to ensure that diversity and inclusion are fundamental components of our organization. Like many other companies, we are reevaluating and revising our recruiting practices. More broadly, we are trying to change the culture that can make Silicon Valley feel like an unwelcome place for many people. Part of transforming that culture includes accountability and transparency.
We last reported our diversity and inclusion data in September 2015. Why are we reporting it again now? The answer is simple: we got a lot bigger very quickly, we changed our survey methodology to allow for greater inclusivity, and we have seen a shift in our diversity data since last time.
What We Found
Some notable data points:
- In our September 2015 blog post we reported on women managers at Slack. Today, 43% of our managers identify as women, and 40% of our people are managed by women, down slightly from our last report.
- The black engineering population at Slack has grown to 8.9% of our overall US engineering organization (and over 7.8% globally, compared to just under 7% in our last report).
- In December 2015, 6.9% of our US technical employees and 4.4% of our total US employee population identified as black. Technical roles are self-reported but include product, design, engineering, QA and technical account managers.
- The Hispanic/Latino(a) population was either negligible or obscured by having a broad multiracial category in our first survey. In this most recent survey, 5.6% of our US engineering organization reported themselves to be Hispanic/Latino(a), along with 6.1% of all US technical employees.
- In our September blog post, 18% of people in our global engineering organization identified as women. That number has risen to 24% today, 26% in the US. Across all departments globally, women are currently 43% of our workforce, up from 39% in our September post.
- Often not reported among tech companies is the intersection of race and gender. Looking at women from underrepresented groups of color (Native, Black, and Hispanic/Latina, frequently referred to as underrepresented minorities or “URMs”), we found that 9% of our US engineering organization reports in these categories.
- Our LGBTQ population has grown from 10% of Slack’s global workforce in June to 13% of our global population in December.
Slack US by Race & Ethnicity
Slack Global by Race & Ethnicity
Female-identified Employees at Slack
Notes on Methodology
We did our first diversity and inclusion survey in mid-year 2015. At that time, Slack had just over 170 employees. By early December we had over 290 people. Today, just over a month into 2016, we have nearly 370 employees worldwide. That rapid growth was a key reason we re-ran the survey. Some additional context that is helpful in evaluating our results:
- Our diversity surveys are voluntarily self-reported data from employees. Surveys are anonymous. None of the data we collect about race/ethnicity, gender identity, or sexual orientation in these surveys is tied to data in our HRIS system. Some data pertaining to gender identity comes from HRIS reporting.
- The June 2015 survey only allowed for single-select on race/ethnicity, due to the limitations of our survey tool. Employees were given the option to select “multiracial” or “other,” and indicate how they would describe their racial and ethnic background. We received very clear employee feedback that a multiselect option was preferable to “other” so we made that change for our December 2015 survey.
- In both surveys, we did not get to a full 100% participation rate. However, we did get over 90% both times. Note that being short of 100% leaves a margin of error in the results, which could sway the numbers in either direction. All results are therefore approximate.
- The June 2015 report was based on global data. The December 2015 survey population was larger, allowing us to report US and global data separately. In addition to our headquarters in San Francisco, Slack has offices in Vancouver, Canada, Dublin, Ireland, and Melbourne, Australia.
This is an Ongoing Evolution
Because we are still small as companies go, every person we hire and every person who leaves can make a dramatic difference to our diversity data. For the most part, we appear to be harnessing our net growth in a positive way, but that could change. Some changes in our results could also be partly explained by the changes in survey design between the two surveys that we ran.
We recognize that we still have a long way to go. For example, while there are women leaders in our engineering and technical organizations, there are still no leadership positions in engineering, product or design held by URMs. This is a glaring omission for a company where 13% of the global engineering organization reports as URMs. One way we are starting to address this gap is by introducing the Rooney Rule into our recruiting process as we hire for more senior-level leadership roles. We also recognize that we do not yet have a woman or person of color from an underrepresented group on our board of directors. When we begin to add outside directors, addressing this will be an important priority.
All in all, our takeaway from this is that talking about diversity and inclusion keeps the issue front of mind for ourselves and our people. So we are going to keep talking about it. Of course, talk is not enough. We will continue to regularly report on our status so that we can be held accountable, and we will continue to look for ways in which we can improve.
(source: carrot666 by way of kaberabbits)
Now that I've got your attention: my friend Erica is raising money for much-needed trauma therapy and could use your help. I've known her IRL for ten years and can vouch for her as much as I can for anyone in the world; she's a real person and the money will go to do what it says on the tin. Erica is someone who's supported me in a myriad of ways, and I'm not the only one, so if you help her, you'll be helping me.
If you have a couple bucks to spare: do it to support an intersectional social justice writer, do it to support a disabled queer trans woman of color, do it to redistribute wealth, or just do it because that would make me happy. Here's the link to her fundraiser. I reserve the right to keep nagging you all until she meets her goal.
4:30PM: ok, so my sister volunteered to shepherd me @the caucus. figure i'll set an alarm for 6, put some eye drops in so i can stand the fluorescents, take a muscle relaxer, and pretend i can do this!
5:30 PM have realized i need to shower. everything now feels insurmountable
6:30 PM Have showered. Get to walk to polling site. Curse democracy in general and everyone in particular.
6:50PM Be one of the last people to show up, yet still have to stand in line for 20 minutes, only to ...
6:51PM Run into the girl my boyfriend is also dating.
6:54PM Duck into bathroom and send boyfriend selfie of D: face.
7:10PM Finish registering.
7:15PM Commandeer bench, commence pretending to ignore everyone and reading fanfic in desperate attempt to actually ignore everyone.
7:20PM Realize legs are pissed off at me and I still have to walk home before I can take more meds that would help the pain. Yell AYE a few times to vote for people to do stuff.
7:25PM Be suddenly divided into a section where I'm expected to stand for possibly 2 hours.
Idealist on Shoulder: Shame on you for being so surprised that someone we elected actually took immediate appropriate action with the power of their office!
Realist on Other Shoulder: Breathe deeply and release this moment, for it will never happen again.
7:30PM Commandeer chair.
7:36PM Want to leave.
7:37PM Want to leave.
7:38PM Really want to leave.
7:40PM Sister asks "how are you holding up?" I say "Great!" She says "are you sure? You know you have trouble with spaces you can't leave!"
NO, I FORGOT, THANKS FOR REMINDING ME OMFG.
7:41PM Sister attempts to introduce me to friend. I can't even right now. Am rude by accident probably on purpose.
7:42PM Want to leave.
7:43PM Concentrating all will on NOT having panic attack.
7:43PM Really want to leave.
7:45PM Can't stand this anymore; leave suddenly. Immediately feel a thousand times better but still vomit on the way home.
Find out an hour later when sister and mother get home that my vote was processed appropriately, whatever the fuck that means. FREEDOM.
Harassment as Externality

In part 3, I argued that online harassment is not an accident: it's something that service providers enable because it's profitable for them to let it happen. To know how to change that, we have to follow the money. There will be no reason to stop abuse online as long as advertisers are the customers of the services we rely on. To enter into a contract with a service you use and expect that the service provider will uphold their end of it, you have to be their customer, not their product. As their product, you have no more standing to enter into such a contract than do the underground cables that transmit content.
Harassment, then, is good for business -- at least as long as advertisers are customers and end users are raw material. If we want to change that, we'll need a radical change to the business models of most Internet companies, not shallow policy changes.
Deceptive Advertising

Why is false advertising something we broadly disapprove of -- something that's, in fact, illegal -- but spreading false information in order to entice more eyeballs to view advertisements isn't? Why is it illegal to run a TV ad that says "This toy will run without electricity or batteries," but not illegal for a social media site to surface the message, "Alice is a slut, and while we've got your attention, buy this toy?" In either case, it's lying in order to sell something.
Advertising will affect decision-making by Internet companies as long as advertising continues to be their primary revenue source. If you don't believe in the Easter Bunny, you shouldn't believe executives when they tell you that ad money is a big bag of cash that Santa Claus delivers with no strings attached. Advertising incentivizes ad-funded media to do whatever gets the most attention, regardless of truth. The choice to do what gets the most attention has ethical and political significance, because achieving that goal comes at the expense of other values.
Should spreading false information have a cost? Should dumping toxic waste have a cost? They both cost money and time to clean up. CDA 230 shields sites that profit from user-generated content from liability for the costs of that content, and maybe it's time to rethink that. A search engine is not like a common carrier -- one of the differences is that it allows one-to-many communication. There's a difference between building a phone system that any one person can use to call anyone else, and setting up an autodialer that lets the lucky 5th callee record a new message for it.
Accountability and Excuses
"Code is never neutral; it can inhibit and enhance certain kinds of speech over others. Where code fails, moderation has to step in."
-- Sarah Jeong, The Internet of Garbage

Have you ever gone to the DMV or called your health insurance company and been told "The computer is down" when, you suspected, the computer was working fine and it just wasn't in somebody's interest to help you right now? "It's just an algorithm" is "the computer is down," writ large. It's a great excuse for failure to do the work of making sure your tools don't reproduce the same oppressive patterns that characterize the underlying society in which those tools were built. And they will reproduce those patterns as long as you don't actively do the work of making sure they don't. Defamation and harassment disproportionately affect the most marginalized people, because those are exactly the people that you can bully with few or no consequences. Make it easier to harass people, to spread lies about them, and you are making it easier for people to perpetuate sexism and racism.
There are a number of tools that technical workers can use to help mitigate the tendency of the communities and the tools that they build to reproduce social inequality present in the world. Codes of conduct are one tool for reducing the tendency of subcultures to reproduce inequality that exists in their parent culture. For algorithms, human oversight could do the same -- people could regularly review search engine results in a way that includes verifying factual claims that are likely to have a negative impact on a person's life if the claims aren't true. It's also possible to imagine designing heuristics that address the credibility of a source rather than just its popularity. But all of this requires work, and it's not going to happen unless tech companies have an incentive to do that work.
A service-level agreement (SLA) is a contract between the provider of a service and the service's users that outlines what the users are entitled to expect from the service in exchange for their payment. Because people pay for most Web services with their attention (to ads) rather than with money, we don't usually think about SLAs for information quality. For an SLA to work, we would probably have to shift from an ad-based model to a subscription-based model for more services. We can measure how much money you spend on a service -- we can't measure how much attention you provide to its advertisers. So attention is a shaky basis on which to found a contract. Assuming business models where users pay in a more direct and transparent way for the services they consume, could we have SLAs for factual accuracy? Could we have an SLA for how many death threats or rape threats it's acceptable for a service to transmit?
I want to emphasize one more time that this article isn't about public shaming. The conversation that uses the words "public shaming" is about priorities, rather than truth. Some people want to be able to say what they feel like saying and get upset when others challenge them on it rather than politely ignoring it. When I talk about victims of defamation, that's not who I'm talking about -- I'm talking about people against whom attackers have weaponized online media in order to spread outright lies about them.
People who operate search engines already have search quality metrics. Could one of them be truth -- especially when it comes to queries that impinge on actual humans' reputations? Wikipedia has learned this lesson: its policy on biographies of living persons (BLP) didn't exist from the site's inception, but arose as a result of a series of cases in which people acting in bad faith used Wikipedia to libel people they didn't like. Wikipedia learned that if you let anybody edit an article, there are legal risks; the risks were (and continue to be) especially real for Wikipedia due to how highly many search engines rank it. To some extent, content providers have been able to protect themselves from those risks using CDA 230, but sitting back while people use your site to commit libel is still a bad look... at least if the targets are famous enough for anyone to care about them.
Code is Law

Making the Internet more accountable matters because, in the words of Lawrence Lessig, code is law. Increasingly, software automates decisions that affect our lives. Imagine if you had to obey laws, but weren't allowed to read their text. That's the situation we're in with code.
We recognize that the passenger in a hypothetical self-driving car programmed to run over anything in its path has made a choice: they turned the key to start the machine, even if from then on, they delegated responsibility to an algorithm. We correctly recognize the need for legal liability in this situation: otherwise, you could circumvent laws against murder by writing a program to commit murder instead of doing it yourself. Somehow, when physical objects are involved it's easier to understand that the person who turns the key, who deploys the code, has responsibility. It stops being "just the Internet" when the algorithms you designed and deployed start to determine what someone's potential employers think of them, regardless of truth.
There are no neutral algorithms. An algorithmic blank slate will inevitably reproduce the violence of the social structures in which it is embedded. Software designers have the choice of trying to design counterbalances to structural violence into their code, or to build tools that will amplify structural violence and inequality. There is no neutral choice; all technology is political. People who say they're apolitical just mean their political interests align well with the status quo.
Recommendation engines like YouTube, or any other search engine with relevance metrics and/or a recommendation system, just recognize patterns -- right? They don't create sexism; if they recommend sexist videos to people who aren't explicitly searching for them, that's because sexist videos are popular, right? YouTube isn't to blame for sexism, right?
Well... not exactly. An algorithm that recognizes patterns will recognize oppressive patterns, like the determination that some people have to silence women, discredit them, and pollute their agencies. Not only will it recognize those patterns, it will reproduce those patterns by helping people who want to silence women spread their message, which has a self-reinforcing effect: the more the algorithm recommends the content, the more people will view it, which reinforces the original recommendation. As Sarah Jeong wrote in The Internet of Garbage, "The Internet is presently siloed off into several major public platforms" -- public platforms that are privately owned. The people who own each silo own so many computing resources that competing with them would be infeasible for all but a very few -- thus, the free market will never solve this problem.
Companies like Google say they don't want to "be evil", but intending to "not be evil" is not enough. Google has an enormous amount of power, and little to no accountability -- no one who manages this public resource was elected democratically. There's no process for checking the power they have to neglect and ignore the ways in which their software participates in reproducing inequality. This happened by accident: a public good (the tools that make the Internet a useful source of knowledge) has fallen under private control. This would be a good time for breaking up a monopoly.
Persistent Identities

In the absence of anti-monopoly enforcement, is there anything we can do? I think there is. Anil Dash has written about persistent pseudonyms, a way to make it possible to communicate anonymously online while still standing to lose something of value if you abuse that privilege in order to spread false information. The Web site Metafilter charges a small amount of money to create an account, in order to discourage sockpuppeting (the practice of responding to being banned from a Web site by coming back to create a new account) -- it turns out this approach is very effective, since people who are engaging in harassment for laughs don't seem to value their own laughs very highly in terms of money.
I think advertising-based funding is also behind the reason why more sites don't implement persistent pseudonyms. The advertising-based business model encourages service providers to make it as easy as possible for people to use their service; requiring the creation of an identity would put an obstacle in the way of immediate engagement. That obstacle is good from the perspective of nurturing quality content, but bad from the perspective that it limits the number of eyeballs that will be focused on ads. And thus, we see another way in which advertising enables harassment.
Again, this isn't a treatise against anonymity. None of what I'm saying implies you can't have 16 different identities for all the communities you participate in online. I am saying that I want it to be harder for you to use one of those identities for defamation without facing consequences.
A note on diversity

Twitter, Facebook, Google, and other social media and search companies are notoriously homogeneous, at least when it comes to their engineering staff and their executives, along gendered and racial lines. But what's funny is that Twitter, Facebook, and other sites that make money by using user-generated content to attract an audience for advertisements, are happy to use the free labor that a diversity of people do for them when they create content (that is, write tweets or status updates). The leaders of these companies recognize that they couldn't possibly hire a collection of writers who would generate better content than the masses do -- and anyway, even if they could, writers usually want to be paid. So they recognize the value of diversity and are happy to reap its benefits. They're not so enthusiastic to hire a diverse range of people, since that would mean sharing profits with people who aren't like themselves.
And so here's a reason why diversity means something. People who build complex information systems based on approximations and heuristics have failed to incorporate credibility into their designs. Almost uniformly, they design algorithms that will promote whatever content gets the most attention, regardless of its accuracy. Why would they do otherwise? Telling the truth doesn't attract an audience for advertisers. On the other hand, there is a limit to how much harm an online service can do before the people whose attention they're trying to sell -- their users -- get annoyed and start to leave. We're seeing that happen with Twitter already. If Twitter's engineers and product designers had included more people in demographics that are vulnerable to attacks on their credibility (starting with women, non-binary people, and men of color), then they'd have a more sustainable business, even if it would be less profitable in the short term. Excluding people on the basis of race and gender hurts everyone: it results in technical decisions that cause demonstrable harm, as well as alienating people who might otherwise keep using a service and keep providing attention to sell to advertisers.
Internalizing the Externalities

In the same way that companies that pollute the environment profit by externalizing the costs of their actions (they get to enjoy all the profit, but the external world -- the government and taxpayers -- get saddled with the responsibility of cleaning up the mess), Internet companies get to profit by externalizing the cost of transmitting bad-faith speech. Their profits are higher because no one expects them to spend time incorporating human oversight into pattern recognition. The people who actually generate bad-faith speech get to externalize the costs of their speech as well. It's the victims who pay.
We can't stop people from harassing or abusing others, or from lying. But we can make it harder for them to do it consequence-free. Let's not let the perfect be the enemy of the good. Analogously, codes of conduct don't prevent bad actions -- rather, they give people assurance that justice will be done and harmful actions will have consequences. Creating a link between actions and consequences is what justice is about; it's not about creating dark corners and looking the other way as bullies arrive to beat people up in those corners.
...the unique force-multiplying effects of the Internet are underestimated. There’s a difference between info buried in small font in a dense book of which only a few thousand copies exist in a relatively small geographic location versus blasting this data out online where anyone with a net connection anywhere in the world can access it.
-- Katherine Cross, "'Things Have Happened In The Past Week': On Doxing, Swatting, And 8chan"

When we protect content providers from liability for the content that they have this force-multiplying effect on, our priorities are misplaced. With power comes responsibility; currently, content providers have enormous power to boost some signals while dampening others, and the fact that these decisions are often automated and always motivated by profit rather than pure ideology doesn't reduce the need to balance that power with accountability.
"The technical architecture of online platforms... should be designed to dampen harassing behavior, while shielding targets from harassing content. It means creating technical friction in orchestrating a sustained campaign on a platform, or engaging in sustained hounding."
-- Sarah Jeong, The Internet of Garbage

That our existing platforms neither dampen nor shield isn't an accident -- dampening harassing behavior would limit the audience for the advertisements that can be attached to the products of that harassing behavior. Indeed, they don't just fail to dampen, they do the opposite: they amplify the signals of harassment. At the point where an algorithm starts to give a pattern a life of its own -- starts to strengthen a signal rather than merely repeating it -- it's time to assign more responsibility to companies that trade in user-generated content than we traditionally have. To build a recommendation system that suggests particular videos are worth watching is different from building a database that lets people upload videos and hand URLs for those videos off to their friends. Recommendation systems, automated or not, create value judgments. And the value judgments they surface have an irrevocable effect on the world. Helping content get more eyeballs is an active process, whether or not it's implemented by algorithms people see as passive.
There is no hope of addressing the problem of harassment as long as it continues to be an externality for the businesses that profit from enabling it. Whether by supporting subscription-based services with our money and declining to give our attention to advertising-based surfaces, or expanding legal liability for the signals that a service selectively amplifies, or by normalizing the use of persistent pseudonyms, people will continue to have their lives limited by Internet defamation campaigns as long as media companies can profit from such campaigns without paying their costs.
Do you like this post? Support me on Patreon and help me write more like it.
What Can Zapier Do to Help My Team?
Zapier is a service that allows hundreds of different apps to connect to Slack in new and interesting ways. Using a system of triggers and actions, you can build powerful custom recipes (they call them “Zaps”) that pass data from an application to your Slack team and vice versa. In practice, it only takes a few clicks to get specific information from an app right into your Slack team.
This week, Zapier announced something big: Multi-Step Zaps. Multi-Step Zaps allow you to daisy-chain as many actions as you want in a single recipe, making it easier to allow multiple apps and Slack to do more together.
Think of it like this: say you set up a Slack #volunteer-day channel and asked people to post the times they were willing to volunteer on a specific day. With a single recipe chain, Zapier could copy the usernames and messages posted to the channel, add each as a new line in a Volunteer Day spreadsheet at Google Docs, and send the volunteer coordinator a direct message in Slack summarizing who posted and what was automatically added to the volunteer list sheet.
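In code terms, a Multi-Step Zap is one trigger fanning out to several actions in sequence. Here's a rough Python sketch of that pattern for the volunteer-day example -- the function names are illustrative stand-ins, not Zapier's actual API:

```python
# Illustrative sketch of a trigger -> multiple actions chain, like a
# Multi-Step Zap. The two lists below stand in for the Google Docs
# spreadsheet and the coordinator's Slack DMs.

spreadsheet_rows = []   # stands in for the Volunteer Day sheet
direct_messages = []    # stands in for Slack DMs to the coordinator

def add_spreadsheet_row(user, text):
    spreadsheet_rows.append((user, text))

def dm_coordinator(summary):
    direct_messages.append(summary)

def on_new_channel_message(message):
    """The trigger: fires once for each new post in #volunteer-day."""
    add_spreadsheet_row(message["user"], message["text"])                # action 1
    dm_coordinator(f"{message['user']} volunteered: {message['text']}")  # action 2

on_new_channel_message({"user": "jo", "text": "I can do 2-4pm"})
```

Each new channel post runs the whole chain, which is why daisy-chaining actions in one recipe saves you from building a separate Zap per app.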
Previously, the Zapier team highlighted a long list of Slack integrations on their blog, so we thought we’d share a few of our favorites today, along with ways the new multi-step zaps allow you to extend them. Each should take you no more than a few minutes to set up.
Boost Your Marketing Efforts
Effective marketers always keep an eye on their social channels. Slack’s Twitter integration is a good place to start, but with Zapier, you can also start getting alerts for your company’s new WordPress posts, blog comments or mentions around the web. With a Multi-Step Zap, you could also track those mentions in a spreadsheet, save the link to your Pocket account, and tell Buffer to queue up a Tweet, all in one fell swoop.
The Product Hunt team, for example, relies on Zapier and brand monitoring tool Mention to get notifications in their #interwebz channel for “Product Hunt” mentions.
Turn Messages into Tasks
In a category all by itself: simply star a Slack message, and Zapier will kick off an action in another app. This is helpful for managing new tasks. It’s pretty easy to move starred messages into Trello instantly.
You’re not limited to one task list, either: Multi-Step Zaps can port the information from one starred message to as many tools as you want. That way, if you’re using Wunderlist to manage tasks and your team relies on Basecamp, your data makes it to both apps safe and sound.
When you sell products online, customer experience is key, and there’s nothing stopping your team from getting to know every new customer. Zapier can help: Every new Stripe payment or Shopify sale can post into a #sales channel for your organization, keeping your whole team in the loop.
One Multi-Step Zap can take a new Stripe customer, subscribe them to your MailChimp list, add an entry to a Pipedrive CRM, then give your #newcustomers team a heads up in Slack. Pretty handy, huh?
Motivate Your Event Team
No matter the size, planning for an event quickly turns into a major undertaking. Along the way though, seeing attendees sign up motivates you to continue to the finish line. That’s where Zapier comes in again, helping you stay on top of the latest Eventbrite registrations or Meetup RSVPs.
You can also save on follow-up time by automating your event registration emails. Add a step to your Zap to connect a service like Gmail or Mandrill to fire off personalized welcome messages to new attendees with all the conference logistics they need, automatically.
Getting Started with Slack and Zapier
To get started, sign up for a free Zapier account and click “Make a New Zap.” Once you do, you can use Slack as a “Trigger” or “Action” in your workflow. Triggers, like new Pipedrive activity, start an automation chain, and actions, like new Slack messages, push new data to your account.
In each step, you’ll authenticate an app with Zapier, then pick out live data to use in your recipe. Throughout the process, you’ll have the chance to test your setup and confirm that everything is running smoothly.
If you want to add another action to your chain, hit the + button at the bottom of the workflow (or, the + button between steps) and tweak accordingly.
It’s Zappening (sorry)
Beyond the ideas covered here, Zapier offers hundreds of Slack integration possibilities on its site. Among them are integrations you’ll find nowhere else, such as the free Zapier Email Parser which allows you to parse out important bits from incoming emails that you can then act upon.
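The Email Parser works without code, but the underlying idea is simple pattern extraction from a predictably formatted message. Here is a minimal Python sketch of that idea; the message layout and field names (`Order number`, `Total`) are made up for illustration and aren't anything Zapier prescribes.

```python
import re
from email.parser import Parser

# A made-up incoming email with a predictable body layout.
raw = """From: orders@example.com
Subject: New order

Order number: 1042
Total: $25.00
"""

# Parse headers and body with the standard library, then pull out
# the interesting bits with regular expressions.
msg = Parser().parsestr(raw)
body = msg.get_payload()

order = re.search(r"Order number:\s*(\d+)", body).group(1)
total = re.search(r"Total:\s*\$([\d.]+)", body).group(1)

print(order, total)  # -> 1042 25.00
```

Once the interesting fields are extracted, a Zap can route them anywhere: a spreadsheet row, a CRM entry, or a Slack message.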
Don’t forget that Zapier works in the background: once you set up a Zap, you no longer need to think about it. It’s a true set-it-and-forget-it experience. With myriad integrations to choose from, you don’t need to wonder: Can I integrate this app with Slack?
See minutes online for a more detailed record of the discussions. (The headers below link into the relevant sections of the minutes.)
Karen Myers and Nick Ruffilo reported on outreach plans. One plan is to prepare summaries of the work we’re doing and provide them to reporters, letting them come to us for more information, so we’re not guessing or doing speculative work. We still want people to volunteer for blogs and writing, but we want to provide packaged information to people.
The group also spent some time collecting a list of relevant events, conferences, workshops, etc., and seeing who in the Interest Group may be present at those. This may help with media outreach, synchronizing messages, etc. The plan is to put this list on the group’s Wiki page soon.
Charles LaPierre and Deborah Kaplan reported on the work of the Accessibility Task Force. The TF worked on a draft note. Going through the existing W3C documents and looking at the things that digital publishing says are important, the TF found a set of 8 items that can be addressed by existing W3C and WAI groups. For these, we can say “we would like this particular thing to be included”: we have precise requests, and we know specifically whom to ask. The future work section in the draft is more “someone is working on it and we want to follow it” or “this is going to require more work”, and we really need to think about it.
Some additional accessibility issues came up during the discussion, related to logical reading order (CSS may make things look very different from how they appear in the document itself) and the overall problem of accessibility vs. CSS-generated content. The latter is, potentially, a huge gap.
The group also spent some time on the best way forward in contacting the right experts in the WAI activity. The plan is to talk to the relevant groups as soon as possible and, eventually, to publish the draft as a W3C Note. It is also possible that work will be done in the direction of the WCAG Extension mechanism (the first draft in this direction has just been published at W3C).
A number of examples of complex tables with complex alignments have been collected, showing the difficulties and complexities of what is used in publishing. These tables will, eventually, be forwarded to the CSS Working Group for further analysis, although the IG may have to find the right expert who can check whether those tables can be reproduced via HTML+CSS or whether there are gaps in the current specifications.
we went to my sister's as she has a badly sprained ankle. She got up and went down the stairs and drove around with me a while which was her first time out of the house since the injury. We brought over my walker and shower chair for her. she is wildly plotting how to manage her life and do everything. i watched her bump down, then crawl up, the stairs. she only sprained it what, thursday? it is huge rather like the illustration from the little prince of the snake that swallowed an elephant/hat. But with an elephant inside a formerly snake shaped foot and ankle.
Server-Side Economics
In "Phone Books and Megaphones", I talked about easy access to the megaphone. We can't just blame the people who eagerly pick up the megaphone when it's offered for the content of their speech -- we also have to look at the people who own the megaphone, and why they're so eager to lend it out.
It's not an accident that Internet companies are loath to regulate harassment and defamation. There are economic incentives for the owners of communication channels to disseminate defamation: they make money from doing it, and don't lose money or credibility in the process. There are few incentives for the owners of these channels to maintain their reputations by fact-checking the information they distribute.
I see three major reasons why it's so easy for false information to spread:
- Economic incentives to distribute any information that gets attention, regardless of its truth.
- The public's learned helplessness in the face of software, which makes it easy for service owners to claim there's nothing they can do about defamation. By treating the algorithms they themselves implemented as black boxes, their designers can disclaim responsibility for the actions of the machines they set into motion.
- Algorithmic opacity, which keeps the public uninformed about how code works and makes it more likely they'll believe that it's "the computer's fault" and people can't change anything.
Incentives and Trade-Offs
Consider email spam as a cautionary tale. Spam and abuse are both economic problems. The problem of spam arose because the person who sends an email doesn't pay the cost of transmitting it to the recipient. This creates an incentive to use other people's resources to advertise your product for free. Likewise, harassers can spam the noosphere with lies, as they continue to do in the context of GamerGate, and never pay the cost of their mendacity. Even if your lies get exposed, they won't be billed to your reputation -- not if you're using a disposable identity, or if you're delegating the work to a crowd of people using disposable identities (proxy recruitment). The latter is similar to how spammers use botnets to get computers around the world to send spam for them, usually unbeknownst to the computers' owners -- except rather than using viral code to co-opt a machine into a botnet, harassers use viral ideas to recruit proxies.
In The Internet of Garbage, Sarah Jeong discusses the parallels between spam and abuse at length. She asks why the massive engineering effort that's been put towards curbing spam -- mostly successfully, at least in the sense of saving users from the time it takes to manually filter spam (Internet service providers still pay the high cost of transmitting it, only to be filtered out at the client side) -- hasn't been applied to the abuse problem. I think the reason is pretty simple: spam costs money, but abuse makes money. By definition, almost nobody wants to see spam (a tiny percentage of people do, which is why it's still rewarding for spammers to try). But lots of people want to see provocative rumors, especially when those rumors reinforce their sexist or racist biases. In "Trouble at the Koolaid Point", Kathy Sierra wrote about the incentives for men to harass women online: a belief that any woman who gets attention for her work must not deserve it, must have tricked people into believing her work has value. This doesn't create an economic incentive for harassment, but it does create an incentive -- meanwhile, if you get more traffic to your site and more advertising money because someone's using it to spread GamerGate-style lies, you're not going to complain. Unless you follow a strong ethical code, of course, but tech people generally don't. Putting ethics ahead of profit would betray your investors, or your shareholders.
If harassment succeeds because there's an economic incentive to let it pass through your network, we have to fight it economically as well. Moralizing about why you shouldn't let your platform enable harassment won't help, since the platform owners have no shame.
Creating these incentives matters. Currently, there's a world-writeable database with everyone's names as the keys, with no accounting and no authentication. A few people control it and a few people get the profits. We shrug our shoulders and say "how can we trace the person who injected this piece of false information into the system? There's no way to track people down." But somebody made the decision to build a system in which people can speak with no incentive to be truthful. Alternative designs are possible.
Autonomous Cars, Autonomous Code
Another reason why there's so little economic incentive to control libel is that the public has a sort of learned helplessness about algorithms... at least when it's "just" information that those algorithms manipulate. We wouldn't ask why a search engine returns the top results that it returns for a particular query (unless we study information retrieval), because we assume that algorithms are objective and neutral, that they don't reproduce the biases of the humans who built them.
In part 2, I talked about why "it's just an algorithm" isn't a valid answer to questions about the design choices that underlie algorithms. We recognize this better for algorithms that aren't purely about producing and consuming information. We recognize that despite being controlled by algorithms, self-driving cars have consequences for legal liability. It's easy to empathize with the threat that cars pose to our lives, and we're correctly disturbed by the idea that you or someone you love could be harmed or killed by a robot who can't be held accountable for it. Of course, we know that the people who designed those machines can be held accountable if they create software that accidentally harms people through bugs, or deliberately harms people by design.
Imagine a self-driving car designer who programmed the machines to act in bad faith: for example, to take risks to get the car's passenger to their destination sooner at the potential expense of other people on the road. You wouldn't say "it's just an algorithm, right?" Now, what if people died due to unforeseen consequences of how self-driving car designers wrote their software rather than deliberate malice? You still wouldn't say, "It's just an algorithm, right?" You would hold the software designers liable for their failure to test their work adequately. Clearly, the reason why you would react the same way in the good-faith scenario as in the bad-faith one is the effect of the poor decision, rather than whether the intent was malicious or merely careless.
Algorithms that are as autonomous as self-driving cars, and perhaps less transparent, control your reputation. Unlike with self-driving cars, no one is talking about liability for what happens when they turn your reputation into a pile of burning wreckage.
Algorithms are also incredibly flexible and changeable. Changing code requires people to think and to have discussions with each other, but it doesn't require contending with the laws of physics, and other than paying humans for their time, it has little cost. Exploiting the majority's lack of familiarity with code in order to act as if having to modify software is a huge burden is a good way to avoid work, but a bad way to tend the garden of knowledge.
Plausible Deniability
Designers and implementors of information retrieval algorithms, then, enjoy a certain degree of plausible deniability that designers of algorithms to control self-driving cars (or robots or trains or medical devices) do not.
During the AmazonFail incident in which an (apparent) bug in Amazon's search software caused books on GLBT-related topics to be miscategorized as "adult" and hidden from searches, defenders of Amazon cried "It's just an algorithm." The algorithm didn't hate queer people, they said. It wasn't out to get you. It was just a computer doing what it had been programmed to do. You can't hold a computer responsible.
"It's just an algorithm" is the natural successor to the magical intent theory of communication. Since your intent cannot be known to someone else (unless you tell them -- but then, you could lie about it), citing your good intent is often an effective way to dodge responsibility for bad actions. Delegating actions to algorithms takes the person out of the picture altogether: if people with power delegate all of their actions to inanimate objects, which lack intentionality, then no one (no one who has power, anyway) has to be responsible for anything.
"It's just an algorithm" is also a shaming mechanism, because it implies that the complainer is naïve enough to think that computers are conscious. But nobody thinks algorithms can be malicious. So saying, "it's just an algorithm, it doesn't mean you harm" is a response to something nobody said. Rather, when we complain about the outcomes of algorithms, we complain about a choice that was made by not making a choice. In the context of this article, it's the choice to not design systems with an eye towards their potential use for harassment and defamation and possible ways to mitigate those risks. People make this decision all the time, over and over, including for systems being designed today -- when there's enough past experience that everybody ought to know better.
Plausible deniability matters because it provides the moral escape hatch from responsibility for defamation campaigns, on the part of people who own search engines and social media sites. (There's also a legal escape hatch from responsibility, at least in the US: CDA Section 230, which shields every "provider or user of an interactive computer service" from liability for "any information provided by another information content provider.") Plausible deniability is the escape hatch, and advertising is the economic incentive to use that escape hatch. Combined with algorithm opacity, they create a powerful set of incentives for online service providers to profit from defamation campaigns. Anything that attracts attention to a Web site (and, therefore, to the advertisements on it) is worth boosting. Since there are no penalties for boosting harmful, false information, search and recommendation algorithms are amplifiers of false information by design -- there was never any reason to design them not to elevate false but provocative content.
Transparency
I've shown that information retrieval algorithms tend to be bad at limiting the spread of false information because doing the work to curb defamation can't be easily monetized, and because people have low expectations for software and don't hold its creators responsible for their actions. A third reason is that the lack of visibility of the internals of large systems has a chilling effect on public criticism of them.
Plausible deniability and algorithmic opacity go hand in hand. In "Why Algorithm Transparency is Vital to the Future of Thinking", Rachel Shadoan explains in detail what it means for algorithms to be transparent or opaque. The information retrieval algorithms I've been talking about are opaque. Indeed, we're so used to centralized control of search engines and databases that it's hard to imagine them being otherwise.
"In the current internet ecosystem, we–the users–are not customers. We are product, packaged and sold to advertisers for the benefit of shareholders. This, in combination with the opacity of the algorithms that facilitate these services, creates an incentive structure where our ability to access information can easily fall prey to a company’s desire for profit."
-- Rachel Shadoan

In an interview, Chelsea Manning commented on this problem as well:

"Algorithms are used to try and find connections among the incomprehensible 'big data' pools that we now gather regularly. Like a scalpel, they're supposed to slice through the data and surgically extract an answer or a prediction to a very narrow question of our choosing—such as which neighborhood to put more police resources into, where terrorists are likely to be hiding, or which potential loan recipients are most likely to default. But—and we often forget this—these algorithms are limited to determining the likelihood or chance based on a correlation, and are not a foregone conclusion. They are also based on the biases created by the algorithm's developer.

These algorithms are even more dangerous when they happen to be proprietary 'black boxes.' This means they cannot be examined by the public. Flaws in algorithms, concerning criminal justice, voting, or military and intelligence, can drastically affect huge populations in our society. Yet, since they are not made open to the public, we often have no idea whether or not they are behaving fairly, and not creating unintended consequences—let alone deliberate and malicious consequences."
-- Chelsea Manning, BoingBoing interview by Cory Doctorow

Opacity results from the ownership of search technology by a few private companies, and their desire not to share their intellectual property. If users were the customers of companies like Google, there would be more of an incentive to design algorithms that use heuristics to detect false information that damages people's credibility. Because advertisers are the customers, and because defamation generally doesn't affect advertisers negatively (unless the advertiser itself is being defamed), there is no economic incentive to do this work. And because people don't understand how algorithms work, and couldn't understand any of the search engines they used even if they wanted to (since the code is closed-source), it's much easier for them to accept the spread of false information as an inevitable consequence of technological progress.
Manning's comments, especially, show why the three problems of economic incentives, plausible deniability, and opacity are interconnected. Economics give Internet companies a reason to distribute false information. Plausible deniability means that the people who own those companies can dodge any blame or shame by assigning fault to the algorithms. And opacity means nobody can ask the people who design and implement the algorithms to do better, because you can't critique the algorithm if you can't see the source code in the first place.
It doesn't have to be this way. In part 4, I'll suggest a few possibilities for making the Internet a more trustworthy, accountable, and humane medium.
To be continued.
Do you like this post? Support me on Patreon and help me write more like it.
Good afternoon. I'm Sumana Harihareswara, and I represent myself, and my firm Changeset Consulting http://changeset.nyc/ . I'm here to discuss some things we can learn from comparing antiharassment policies, or community codes of conduct, to copyleft software licenses such as the GPL. I'll be laying out some major similarities and differences, especially delving into how these different approaches give us insight about common community attitudes and assumptions. And I'll lay out some lessons we can apply as we consider and advocate various sides of these issues, and potentially to apply to some other topics within free and open source software as well.
My notes will all be available online after this, so you don't have to scramble to write down my brilliant insights, or, more likely, links. And I don't have any slides. If you really need slides, I'm sorry, and if you're like, YES! then just bask in the next twenty-five minutes.
( Text of my notes )
Phone Books and Megaphones
Think back to 1986. Imagine if somebody told you: "In 30 years, a public directory that's more accessible and ubiquitous than the phone book is now will be available to almost everybody at all times. This directory won't just contain your contact information, but also, a page anyone can write on, like a middle-school slam book but meaner. Whenever anybody writes on it, everybody else will be able to see what they wrote." I don't think you would have believed it, or if you found it plausible, you probably wouldn't have found this state of affairs acceptable. Yet in 2016, that's how things are. Search engine results have an enormous effect on what people believe to be true, and anybody with enough time on their hands can manipulate search results.
Antisocial Network Effects
When you search for my name on your favorite search engine, you'll find some results that I wish weren't closely linked to my name. People who I'd prefer not to think about have written blog posts mentioning my name, and those articles are among the results that most search engines will retrieve if you're looking for texts that mention me. But that pales in comparison with the experiences of many women. A few years ago, Skud wrote:
"Have you ever had to show your male colleagues a webpage that calls you a fat dyke slut? I don’t recommend it."
Imagine going a step further: have you ever had to apply for jobs knowing that if your potential manager searches for your name online, one of the first hits will be a page calling you a fat dyke slut? In 2016, it's pretty easy for anybody who wants to make that happen to somebody else, as long as the target isn't unusually wealthy or connected. Not every potential manager is going to judge someone negatively just because someone called that person a fat dyke slut on the Internet, and in fact, some might judge them positively. But that's not the point -- the point is if you end up in the sights of a distributed harassment campaign, then one of the first things your potential employers will know about you, possibly for the rest of your life, might be that somebody called you a fat dyke slut. I think most of us, if we had the choice, wouldn't choose that outcome.
Suppose the accusation isn't merely a string of generic insults, but something more tangible: suppose someone decides to accuse you of having achieved your professional position through "sleeping your way to the top," rather than merit. This is a very effective attack on a woman's credibility and competence, because patriarchy primes us to be suspicious of women's achievements anyway. It doesn't take much to tip people, even those who don't consciously hold biases against women, into believing these attacks, because we hold unconscious biases against women that are much stronger than anyone's conscious bias. It doesn't matter if the accusation is demonstrably false -- so long as somebody is able to say it enough times, the combination of network effects and unconscious bias will do the rest of the work and will give the rumor a life of its own.
Not every reputation system has to work the way that search engines do. On eBay, you can only leave feedback for somebody else if you've sold them something or bought something from them. In the 17 years since I started using eBay, that system has been very effective. Once somebody accumulates social capital in the form of positive feedback, they generally don't squander that capital. The system works because having a good reputation on eBay has value, in the financial sense. If you lose your reputation (by ripping somebody off), it takes time to regain it.
On the broader Internet, you can use a disposable identity to generate content. Unlike on eBay, there is no particular reason to use a consistent identity in order to build up a good track record as a seller. If your goal is to build a #personal #brand, then you certainly have a reason to use the same name everywhere, but if your goal is to destroy someone else's, you don't need to do that. The ready availability of disposable identities ("sockpuppets") means that defaming somebody is a low-risk activity even if your accusations can be demonstrated to be false, because by the time somebody figures out you made your shit up, you've moved on to using a new name that isn't sullied by a track record of dishonesty. So there's an asymmetry here: you can create as many identities as you want, for no cost, to destroy someone else's good name, but having a job and functioning in the world makes it difficult to change identities constantly.
The Megaphone
For most of the 20th century, mass media consisted of newspapers, then radio and then TV. Anybody could start a newspaper, but radio and TV used the broadcast spectrum, which is a public and scarce resource and thus is regulated by governmental agencies. Because the number of radio and TV channels was limited, telecommunications policy was founded on the assumption that some amount of regulation of these channels' use was necessary and did not pose an intrinsic threat to free speech. The right to use various parts of the broadcast spectrum was auctioned off to various private companies, but this was a limited-scope right that could be revoked if those companies acted in a way that blatantly contravened the public interest. A consistent pattern of deception would have been one thing that went against the public interest. As far as I know, no radio or TV broadcaster ever embarked upon a deliberate campaign of defaming multiple people, because the rewards of such an activity wouldn't offset the financial losses that would be inevitably incurred when the lies were exposed.
(I'll use "the megaphone" as a shorthand for media that are capable of reaching a lot of people: formerly, radio and broadcast TV; then cable TV; and currently, the Internet. Not just "the Internet", though, but rather: Internet credibility. Access to the credible Internet (the content that search engine relevance algorithms determine should be centered in responses to queries) is gatekept by algorithms; access to old media was gatekept by people.)
At least until the advent of cable TV, then, the broader the reach of a given communication channel, the more closely access to that channel was monitored and regulated. It's not that this system always worked perfectly, because it didn't, just that there was more or less consensus that it was correct for the public to have oversight with respect to who could be entrusted with access to the megaphone.
Now that access to the Internet is widespread, the megaphone is no longer a scarce resource. In a lot of ways, that's a good thing. It has allowed people to speak truth to power and made it easier for people in marginalized groups to find each other. But it also means that it's easy to start a hate campaign based on falsehoods without incurring any personal risk.
I'm not arguing against anonymity here. Clearly, at least some people have total freedom to act in bad faith while using the names they're usually known by: Milo Yiannopoulos and Andrew Breitbart are obvious examples. If use of real names deters harassment, why are they two of the best-known names in harassment?
Algorithm as Excuse
Zoë Quinn pointed out on Twitter that she can no longer share content with her friends, even if she limits access to it, because her name is irrevocably linked to the harassment campaign that her ex-boyfriend started in order to defame her in 2014, otherwise known as GamerGate. If she uses YouTube to share videos, its recommendation engine will suggest to her friends that they watch "related" videos that -- at best -- attack her for her gender and participation in the game development community. There is no individual who works for Google (YouTube's parent company) who made an explicit decision to link Quinn's name with these attacks. Nonetheless, a pattern in YouTube's recommendations emerged because of a concerted effort by a small group of dedicated individuals to pollute the noosphere in order to harm Quinn. If you find this outcome unacceptable, and I do, we have to consider the chain of events that led to it and ask which links in the chain could be changed so this doesn't happen to someone else in the future.
There is a common line of response to this kind of problem: "You can't get mad at algorithms. They're objective and unbiased." Often, the implication is that the person complaining about the problem is expecting computers to be able to behave sentiently. But that's not the point. When we critique an algorithm's outcome, we're asking the people who design and maintain the algorithms to do better, whether the outcome is that it uses too much memory or that it causes a woman to be re-victimized every time someone queries a search engine for her name. Everything an algorithm does is because of a design choice that one or several humans made. And software exists to serve humans, not the other way around: when it doesn't do what we want, we can demand change, rather than changing ourselves so that software developers don't have to do their jobs. By saying "it's just an algorithm", we can avoid taking responsibility for our values as long as we encode those values as a set of rules executable by machine. We can automate disavowal.
How did we get here -- to a place where anyone can grab the megaphone, anyone can scribble in the phone book, and people who benefit from the dissemination of this false information are immune from any of the risks? I'll try to answer that in part 3.
To be continued.
Do you like this post? Support me on Patreon and help me write more like it.