Speed of Light
I’m Brianna Wu, And I’m Risking My Life Standing Up To Gamergate  

Brianna Wu:

This weekend, a man wearing a skull mask posted a video on YouTube outlining his plans to murder me. I know his real name. I documented it and sent it to law enforcement, praying something is finally done. I have received these death threats and 43 others in the last five months.

This experience is the basis of a Law & Order episode airing Wednesday called the “Intimidation Game.” I gave in and watched the preview today. The main character appears to be an amalgamation of me, Zoe Quinn, and Anita Sarkeesian, three of the primary targets of the hate group called GamerGate.

My name is Brianna Wu. I develop video games for your phone. I lead one of the largest professional game-development teams of women in the field. Sometimes I speak out on women in tech issues. I’m doing everything I can to save my life except be silent.

The week before last, I went to court to file a restraining order against a man who calls himself “The Commander.” He made a video holding up a knife, explaining how he’ll murder me “Assassin’s Creed Style.” He wrecked his car en route to my house to “deliver justice.” In logs that leaked, he claimed to have weapons and a compatriot to do a drive-by.

Awful, disturbing stuff.

I’ll also remind you that you don’t have to be making death and rape threats to be part of sexism in tech. The hatred of women has got to stop.

See also the “top stories” at the bottom of the essay. This representation of women as obsessed only with their looks is pretty toxic to both men and women, too.

The Dynabook and the App Store

Yesterday I linked to J. Vincent Toups’ 2011 Duckspeak Vs Smalltalk, an essay about how far, or really how little, we’ve come since Alan Kay’s Dynabook concept, and a critique of the limitations inherent in today’s App Store style computing.

A frequent reaction to this line of thought is “we shouldn’t make everyone be a programmer just to use a computer.” In fact, after Loren Brichter shared the link on Twitter, there were many such reactions. While I absolutely agree abstractions are a good thing (e.g., you shouldn’t have to understand how electricity works in order to turn on a light), one of the problems with computers and App Stores today is that we don’t even have the option of knowing how the software works, even if we wanted to.

But the bigger problem is what our conception of programming is today. When the Alto computer was being researched at Xerox, nobody was expecting people to program like we do today. JavaScript, Objective C, and Swift (along with all the other “modern” languages today) are pitiful languages for thinking, and were designed instead for managing computer resources (JavaScript, for example, was thoughtlessly cobbled together in just ten days). The reaction of “people shouldn’t have to program to use a computer” hinges on what it means to program, and what software developers think of programming is vastly different from what the researchers at Xerox had in mind.

Programming, according to Alan Kay and the gang, was a way for people to be empowered by computers. Alan correctly recognized the computer as a dynamic medium (the “dyna” in “Dynabook”) and deemed it crucial people be literate with this medium. Literacy, you’ll recall, means being able to read and write in a medium, to be able to think critically and reason with a literature of great works (that’s the “book” in “Dynabook”). The App Store method of software essentially neuters the medium into a one-way consumption device. Yes, you can create on an iPad, but the system’s design language does not allow for creation of dynamic media.

Nobody is expecting people to have to program a computer in order to use it, but the PARC philosophy has at its core a symmetric concept of creation as well as consumption. Not only are all the parts of Smalltalk accessible to any person, but all the parts are live, responsive, active objects. When you need to send a live, interactive model to your colleague or your student, you send the model, not an attachment, not a video or a picture, but the real live object. When you need to do an intricate task, you don’t use disparate “apps” and pray the developers have somehow enabled data sharing between them, but you actually combine the parts yourself. That’s the inherent power in the PARC model that we’ve completely eschewed in modern operating systems.

Smalltalk and the Alto were far from perfect, and I’ll be the last to suggest we use them as is. But I will suggest we understand the philosophy and the desires to empower people with computers and use that understanding to build better systems. I’d highly recommend reading Alan’s Early History of Smalltalk and A Personal Computer for Children of All Ages to learn what the personal computer was really intended to be.

Language in Everyday Life  

Ash Furrow on discriminating language:

Recently, I’ve been examining the language I use in the context of determining if I may be inadvertently hurting anyone. For instance, using “insane” or “crazy” as synonyms for “unbelievable” probably doesn’t make people suffering from mental illness feel great. […]

Pretty straightforward. There are terms out there that are offensive to people who identify as members of groups that those terms describe. The terms are offensive primarily because they connote negativity beyond the meaning of the word. […]

To me, the bottom-line is that these words are hurtful and there are semantically identical synonyms to use in their place, so there is no reason to continue to use them. Using the terms is hurtful and continuing to use them when you know they’re hurtful is kind of a dick move.

Ash published this article in October 2014 and it’s been on my mind ever since. It makes total sense to me, and I’ve been trying hard to remove these words from my vocabulary. It takes time, especially considering how pervasive they are, but it’s important. If you substitute the word “magical” for any of the bad words, it makes your sentences pretty delightful, and shows how banal the original words really are, as we overuse them anyway.

The Decline of the Xerox PARC Philosophy at Apple Computers  

J. Vincent Toups on the Dynabook:

While the Dynabook was meant to be a device deeply rooted in the ethos of active education and human enhancement, the iDevices are essentially glorified entertainment and social interaction (and tracking) devices, and Apple controlled revenue stream generators for developers. The entire “App Store” model, then works to divide the world into developers and software users, whereas the Xerox PARC philosophy was for there to be a continuum between these two states. The Dynabook’s design was meant to recruit the user into the system as a fully active participant. The iDevice is meant to show you things, and to accept a limited kind of input - useful for 250 character Tweets and Facebook status updates, all without giving you the power to upset Content Creators, upon whom Apple depends for its business model. Smalltalk was created with the education of adolescents in mind - the iPad thinks of this group as a market segment. […]

It is interesting that at one point, Jobs (who could not be reached for comment [note: this was written before Jobs’ death]) described his vision of computers as “interpersonal computing,” and by that standard, his machines are a success. It is just a shame that in an effort to make interpersonal engagement over computers easy and ubiquitous, the goal of making the computer itself easily engaging has become obscured. In a world where centralized technology like Google can literally give you a good guess at any piece of human knowledge in milliseconds, its a real tragedy that the immense power of cheap, freely available computational systems remains locked behind opaque interfaces, obscure programming languages, and expensive licensing agreements.

The article is also great because it helps dispel the myth that Apple took “Xerox’s rough unfinished UI and polished it for the Mac.” It’s closer to the truth to say Apple dramatically stripped the Smalltalk interface of its functionality, resulting in a toy, albeit cheaper, personal computer.

Why I Just Asked My Students To Put Their Laptops Away  

Renowned internet media proponent and NYU professor Clay Shirky:

Over the years, I’ve noticed that when I do have a specific reason to ask everyone to set aside their devices (‘Lids down’, in the parlance of my department), it’s as if someone has let fresh air into the room. The conversation brightens, and more recently, there is a sense of relief from many of the students. Multi-tasking is cognitively exhausting — when we do it by choice, being asked to stop can come as a welcome change.

So this year, I moved from recommending setting aside laptops and phones to requiring it, adding this to the class rules: “Stay focused. (No devices in class, unless the assignment requires it.)” Here’s why I finally switched from ‘allowed unless by request’ to ‘banned unless required’. […]

Worse, the designers of operating systems have every incentive to be arms dealers to the social media firms. Beeps and pings and pop-ups and icons, contemporary interfaces provide an extraordinary array of attention-getting devices, emphasis on “getting.” Humans are incapable of ignoring surprising new information in our visual field, an effect that is strongest when the visual cue is slightly above and beside the area we’re focusing on. (Does that sound like the upper-right corner of a screen near you?)

The form and content of a Facebook update may be almost irresistible, but when combined with a visual alert in your immediate peripheral vision, it is—really, actually, biologically—impossible to resist. Our visual and emotional systems are faster and more powerful than our intellect; we are given to automatic responses when either system receives stimulus, much less both. Asking a student to stay focused while she has alerts on is like asking a chess player to concentrate while rapping their knuckles with a ruler at unpredictable intervals. […]

Computers are not inherent sources of distraction — they can in fact be powerful engines of focus — but latter-day versions have been designed to be, because attention is the substance which makes the whole consumer internet go.

The fact that hardware and software is being professionally designed to distract was the first thing that made me willing to require rather than merely suggest that students not use devices in class. There are some counter-moves in the industry right now — software that takes over your screen to hide distractions, software that prevents you from logging into certain sites or using the internet at all, phones with Do Not Disturb options — but at the moment these are rear-guard actions. The industry has committed itself to an arms race for my students’ attention, and if it’s me against Facebook and Apple, I lose. […]

The “Nearby Peers” effect, though, shreds that rationale. There is no laissez-faire attitude to take when the degradation of focus is social. Allowing laptop use in class is like allowing boombox use in class — it lets each person choose whether to degrade the experience of those around them.

Explorable Explanations  

Nicky Case on explorable explanations:

This weekend, I attended a small 20-person workshop on figuring out how to use interactivity to help teach concepts. I don’t mean glorified flash cards or clicking through a slideshow. I mean stuff like my 2D lighting tutorial, or this geology-simulation textbook, or this explorable explanation on explorable explanations.

Over the course of the weekend workshop, we collected a bunch of design patterns and considerations, which I’ve made crappy diagrams of, as seen below. Note: this was originally written for members of the workshop, so there’s a lot of external references that you might not get if you weren’t there.

This is great because it’s full of examples about why and when and what to make things explorable.

Design & Compromise  

Mills Baker:

In ten years, when America’s health care system is still a hideous, tragic mess, Republicans will believe that this is due to the faulty premises of Democratic legislation, while Democrats will believe that the legislation was fatally weakened by obstinate Republicans. While we can of course reason our way to our own hypotheses, we will lack a truly irrefutable conclusion, the sort we now have about, say, whether the sun revolves around the earth.

Thus: a real effect of compromise is that it prevents intact ideas from being tested and falsified. Instead, ideas are blended with their antitheses into policies that are “no one’s idea of what will work,” allowing the perpetual political regurgitation, reinterpretation, and relational stasis that defines the governance of the United States.

Baker goes on to detail how compromise results in similarly poor designs and argues in favour of the auteur. Compromise can often work like racial colourblindness, when instead each voice needs to shine on its own merits.

Worse still, in my experience, compromise is often a self-reinforcing loop: it’s difficult to convince an organization it should stop compromising when it can only agree to things by compromising. It’s poison in the institutional well.

Bret Victor’s “The Humane Representation of Thought”

Right before Christmas, Bret Victor released his newest talk, “The Humane Representation of Thought”, summing up the current state of his long-reaching and far-seeing visionary research. I’ve got lots of happy-but-muddled thoughts about it, but suffice it to say, I loved it. If you like his work, you’ll love this talk.

He also published some notes about the thinking that led to the talk.

Shortly after the talk was published, Wired did a story about him and his work; although nothing too in-depth, it offers a nice overview of the talk and his research.

Finally, somehow I missed this profile of Bret and his research by John Pavlus published in early December.

On Greatness  

Glen Chiacchieri on the Greats:

There seem to be a few unifying components of greatness. The first is willingness to work hard. No one has become great by surfing the internet. Anyone who you would consider great has most likely achieved their status through sheer hard work, not necessarily their genius. In fact, their genius probably came after the fact, as a result of their work, rather than through any latent brilliance that was lurking beneath the surface, ready to be sprung upon the world. […]

Great people don’t just sit around thinking, either; they’re creating. Without creating something, there would be no sign of their greatness. They would be another cog in the mass machine, churning ideas. They’d be addicted to brain crack. But they’re not. Great people make things: books, songs, blogonet posts, videos, programs, stories, websites. Their greatness is in their creations. Their creations show signs of the hard work and unique genius of the person making it.

I’ll add something sort of parallel to what Glen is saying here: Great people make their work better and clearer by doing a lot of it.

I’m not particularly after greatness (it’s fine if you are), but I’ve struggled a lot in recent years with ideas I “know” to be true in my mind, but that I can’t easily express clearly, either in writing or by speaking. That’s a problem, because whatever potential greatness there might be, nobody else can understand it (that, and it’s probably just actually less great because it isn’t clear, even to me, yet).

Great people who create a lot hone their skills of expressing and refining their ideas.

It’s a Shame

Textual programming languages are mostly devoid of structure — any structure that exists actually exists in our heads — and we have to mentally organize our code around that (modern editors offer us syntax highlighting and auto-indentation, which help, but we still have to maintain the structure mentally).

New programmers often write “spaghetti code” or don’t use variables or don’t use functions. We teach them about these features but they often don’t make the connection on why they’re useful. How many times have you seen a beginner programmer say “Why would I use that?” after you explain a new feature to him or her? Actually, how many times have you seen the opposite? I’ll bet almost never.

I have a hunch this is sort of related to why development communities often shame their members so much. There are a lot of mean spirits around. There’s a lot of “Why the hell would you ever use X?? Don’t you know about Y? Or why X is so terrible? It’s bad and you should feel bad.” We have way too much of that, and my hunch is this is (partly) related to our programming environments being entirely structureless.

When the medium you work in doesn’t actually encourage powerful ways of getting the work done, the community fills in the gaps by shaming its members into “not doing it wrong.”

I could be wrong, and even if I’m right, I’m not saying this is excusable. But I do think we’re quite well known for being a shame culture, and I think we do that in order to keep our heads from exploding. We shame until we believe, and we believe until we understand. Perhaps our environments should help us understand better in the first place, and we can leave the shaming behind.

Citation Needed  

Mike Hoye on why most programming languages use zero-indexed arrays:

Whatever justifications or advantages came along later – and it’s true, you do save a few processor cycles here and there and that’s nice – the reason we started using zero-indexed arrays was because it shaved a couple of processor cycles off of a program’s compilation time. Not execution time; compile time.[…]

I can tell nobody has ever actually looked this up.

Whatever programmers think about themselves and these towering logic-engines we’ve erected, we’re a lot more superstitious than we realize. We tell and retell this collection of unsourced, inaccurate stories about the nature of the world without ever doing the research ourselves, and there’s no other word for that but “mythology”. Worse, by obscuring the technical and social conditions that led humans to make these technical and social decisions, by talking about the nature of computing as we find it today as though it’s an inevitable consequence of an immutable set of physical laws, we’re effectively denying any responsibility for how we got here. And worse than that, by refusing to dig into our history and understand the social and technical motivations for those choices, by steadfastly refusing to investigate the difference between a motive and a justification, we’re disavowing any agency we might have over the shape of the future. We just keep mouthing platitudes and pretending the way things are is nobody’s fault, and the more history you learn and the more you look at the sad state of modern computing the more pathetic and irresponsible that sounds.[…]

The second thing is how profoundly resistant to change or growth this field is, and apparently has always been. If you haven’t seen Bret Victor’s talk about The Future Of Programming as seen from 1975 you should, because it’s exactly on point. Over and over again as I’ve dredged through this stuff, I kept finding programming constructs, ideas and approaches we call part of “modern” programming if we attempt them at all, sitting abandoned in 45-year-old demo code for dead languages. And to be clear: that was always a choice. Over and over again tools meant to make it easier for humans to approach big problems are discarded in favor of tools that are easier to teach to computers, and that decision is described as an inevitability.

The masculine mistake  

What is so broken inside American men? Why do we make so many spaces unsafe for women? Why do we demand that they smile as we harass them - and why, when women bring the reality of their everyday experiences into the open, do we threaten to kill them for it?

If you’re a man reading this, you likely feel defensive by now. I’m not one of those guys, you might be telling yourself. Not all men are like that. But actually, what if they are? And what if men like you telling yourselves that you’re not part of the problem is itself part of the problem?

We’ve all seen the video by now. “Smile,” says the man, uncomfortably close. And then, more angrily, “Smile!”

An actress, Shoshana Roberts, spends a day walking through New York streets, surreptitiously recorded by a camera. Dozens of men accost her; they comment on her appearance and demand that she respond to their “compliments.” […]

This is a huge problem. And unfortunately, it’s but one symptom of a larger issue.

Why do men do this? How can men walk down the same streets as women, attend the same schools, play the same games, live in the same homes, be part of the same families - yet either not realize or not care how hellish we make women’s lives?

One possible answer: Straight American masculinity is fundamentally broken. Our culture socializes young men to believe that they are entitled to sexual attention from women, and that women go about their lives with that as their primary purpose - as opposed to just being other people, with their own plans, priorities and desires.

We teach men to see women as objects, not other human beings. Their bodies are things men are entitled to: to judge, to assess, and to dispose of - in other words, to treat as pornographic playthings, to have access to and, if the women resist, to threaten, to destroy.

We raise young boys to believe that if they are not successful at receiving sexual attention from women, then they are failures as men. Bullying is merciless in our culture, and is heaped upon geeky boys by other young men in particular (and all the more so against boys who do not appear straight).

But because young men are taught to despise vulnerability, in themselves and in others, they instead turn that hatred upon those who are already more vulnerable - women and others - with added intensity. Put differently, and without in any way excusing their monstrous behavior, young men are given unrealistic expectations, taught to hate themselves when reality falls short - and then to blame women for the whole thing.

I’m reminded of this excellent and positive TED talk about a need to give boys better stories. We need more stories where “the guy doesn’t get the girl in the end, and he’s OK with that.” We need to teach boys this is a good outcome, that boys aren’t entitled to girls.

If you’re shopping for presents for boys this Christmas, I implore you to keep this in mind. Don’t buy them a story of a prince or a hero who “gets the girl.”

An Educated Guess, the Video  

At long last, the video of my conference talk from NSNorth 2013 wherein I unveiled the Cortex system:

Modern software development isn’t all that modern. Its origins are rooted in the original Macintosh: an era and environment with no networking, slow processors with limited memory, and almost no collaboration between developers or the processes they wrote. Today, software is developed as though these constraints still remain.

We need a modern approach to building better software. Not just incremental improvements but fundamental leaps forward. This talk presents frameworks at the sociological, conceptual, and programmatic levels to rethink how software should be made, enabling a giant leap to better software.

I haven’t been able to bring myself to watch myself talk yet. Watch it and tell me how it went?

Goodbye, Twitter  

Geoff Pado:

It finally hit me: the way I felt wasn’t “people on Twitter are jerks,” it was “people are jerks on Twitter.” After this epiphany, and a brief hiatus to see if I could even break my own habits, I’ve made my decision: I’m getting off of Twitter, effective immediately. A link to this blog post will be my final tweet, and I’m only going to watch for replies until Tuesday. As part of my hiatus, I’ve already deleted all my Twitter apps from all my devices, and I’ll be scrambling my password on Tuesday to even prevent myself from logging in without going through the “forgot my password” hoops.

Sounds good to me.

Christmasmania

It’s November 23, 2014. In Brooklyn, New York, it’s getting colder as we inch closer to Winter. The leaves are still falling, but the snow isn’t. Nor is there any snow on the ground. But if you listen closely, you can hear a disturbing sound.

It’s thirty-two days until Christmas, and the grocery stores are already playing Christmas music. The streets are already decorated, and Starbucks has all the ornaments and “Holiday Flavors” out in full swing. It’s a week before American Thanksgiving.

This is Christmasmania.

Let’s look at Christmasmania for a moment. We’ve started celebrating a holiday that comes once a year thirty-two days before it arrives. We’ll likely celebrate it for a week after the day, too. That’s almost forty days of Christmas, every year. Let’s look at this another way.

Conservatively, let’s say we spend one month per year in Christmasmania. One month per year is one twelfth of a year. Let’s pretend we live in a land of Christmasmania where instead of spending one month of the year, one twelfth of a year devoted to the “holiday spirit”, we instead spent two hours (2/24 hours = 1/12 of a day) of every single day of the year in Christmasmania.

Every single day, between the hours of 6 and 8 PM, families don their yuletide sweaters, pour each other cups of eggnog, and listen to a few hours of Christmas carols. They’ll spend a few minutes shopping for that perfect gift, they’ll spend a few minutes wrapping it, and they’ll keep it under the tree for half an hour or so. The kids will watch YouTube clips of Rudolph and How the Grinch Stole Christmas. And maybe if they’re good, the kids will get to open a present before being sent off to bed, to have visions of sugarplums dance in their heads.

Two hours of Christmasmania. Every day.

Here’s the really insidious thing about Christmasmania. It’s not that the decorations go up during Halloween. It’s not that Starbucks has eggnog-flavoured napkins before Remembrance and Veterans’ Day. It’s not that the same garbage Christmas songs are recycled and re-recorded by the pop-royalty-du-jour and pumped out of every shopping centre speaker before Americans even have a chance to be thankful. It’s not the over-commercialized nature of “finding the perfect gift for that special someone.”

No, what’s really insidious about Christmasmania is how self-perpetuating and reinforcing it is. For the Christmasmania virus to survive, it must take control of its host, but not kill its host.

Christmasmania, also known as “the Holiday Spirit,” requires its hosts to keep one another in line. Every single one of the numerous Christmas movies (of which Christmasmania dictates we watch at least a few) has at least one social outcast, the “grinch”, who simply does not like Christmas. We are taught to despise this grinch, to pity this grinch, and to rehabilitate the grinch so that he or she can see the “true meaning of Christmas” and get into the “Holiday Spirit.” “If you don’t like Christmas,” the mania tells us, “there’s something wrong with you, because nothing can be wrong with Christmas. Do you like giving? Don’t you like shopping?”

I think Christmas can be a wonderful celebration, a special time to be close with your family and loved ones that you might not otherwise get during the rest of the year, and that’s a great thing. But when we as a whole are programmed and forced to buy in to the mania that surrounds it, the celebration becomes lost in a morass of stop-motion, candy-cane-flavoured Bing Crosby songs. So this Christmas, remember your loved ones. They’re the real present.

Alan Kay’s Commentary on “A Personal Computer For Children Of All Ages”  

So many great gems in here:

The next year I visited Seymour Papert, Wally Feurzig, and Cynthia Solomon to see the LOGO classroom experience in the Lexington schools. This was a revelation! And was much more important to me than the metaphors of “tools” and “vehicles” that were central to the ARPA way of characterizing its vision. This was more like the “environment of powerful epistemology” of Montessori, the “environment of media” of McLuhan, and even more striking: it evoked the invention of the printing press and all that it brought. This was not just “augmenting human intellect”, but the “early shaping of human intellect”. This was a “cosmic service idea”. […]

At this first brush, the service model was: facilitate children “learning the world by constructing it” via an interactive graphical interface to an “object-oriented-simulation-oriented-LOGO-like language”.

A few years later at Xerox PARC I wrote “A Personal Computer For Children Of All Ages”. This was written mostly to start exploring in more depth the desirable services that should be offered. I.e. what should a Dynabook enable? And why should it enable it?

The first context was “everything that ARPA envisioned for adults but in a form that children could also learn and use”. The analogy here was to normal language learning in which children are not given a special “children’s language” but pick up speaking, reading and writing their native language directly through subsets of both the content and the language. In practice for the Dynabook, this required inventing better languages and user interfaces for adults that could also be used for children (this is because most of the paraphernalia for adults in those days was substandard for all). […]

Back then, it was in the context that “education” meant much more than just competing for jobs, or with the Soviet Union; how well “real education” could be accomplished was the very foundation of how well a democratic federal republic could carry out its original ideals.

[Thomas] Jefferson’s key idea was that a general population that has learned to think and has acquired enough knowledge will be able to dynamically steer the “ship of state” through the sometimes rough waters of the future and its controversies (and conversely, that the republic will fail if the general population is not sufficiently educated).

An important part of this vision was that the object of education was not to produce a single point of view, but to produce citizens who could carry out the processes of reconciling different points of view.

If most Americans today were asked “why education?”, it’s a safe bet that most would say “to help get a good job” or to “help make the US more competitive worldwide” (a favorite of our recent Presidents). Most would not mention the societal goal of growing children into adults who will be “enlightened enough to exercise their control with a wholesome discretion” or to understand that they are the “true corrective of abuses of … power”.

Goldieblox Introduces an Action Figure for Girls  

Goldieblox:

Research shows that girls who play with fashion dolls see fewer career options for themselves than boys (see study). One fashion doll is sold every three seconds. Girls’ feet are made for high-tops, not high heels…it’s time for change.

Aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaawesome.

Thoughts About Teaching Science and Mathematics To Young Children [PDF]  

Mind-opening thoughts about early education of math and science by Alan Kay:

Scientists escape to a large extent from simple belief by having done enough real experimentation, modeling building using mathematics that suggests new experiments, etc., to realize that science is more like map-making for real navigators than bible-making: IOW, the maps need to be as accurate as possible with annotations for errors and kinds of measurements, done by competent map-makers rather than story tellers, and they are always subject to improvement and rediscovery: they never completely represent the territory they are trying to map, etc.

Many of us who have been learning how to help children become scientists (that is, to be able to think and act as scientists some of the time) have gathered evidence which shows that helping children actually do real science at the earliest possible ages is the best known way to help them move from simple beliefs in dogma to the more skeptical, empirically derived models of science.[…]

There is abundant evidence that helping children move from human built-in heuristics and the commonsense of their local culture to the “uncommonsense” and heuristic thinking of science, math, etc., is best done at the earliest possible ages. This presents many difficulties ranging from understanding how young children think to the very real problem that “the younger the children, the more adept need to be their mentors (and the opposite is more often the case)”.[…]

So, for young and youngish children (say from 4 to 12) we still have a whole world of design problems. For one thing, this is not an homogenous group. Cognitively and kinesthetically it is at least two groups (and three groupings is an even better fit). So, we really think of three specially designed and constructed environments here, where each should have graceful ramps into the next one.

The current thresholds exclude many designs, but more than one kind of design could serve. If several designs could be found that serve, then we have a chance to see if the thresholds can be raised. This is why we encourage others to try their own comprehensive environments for children. Most of the historical progress in this area has come from a number of groups using each other’s ideas to make better attempts (this is a lot like the way any science is supposed to work). One of the difficulties today is that many of the attempts over the last 15 or so years have been done with too low a sense of threshold and thus start to clog and confuse the real issues.

I think one of the trickiest issues in this kind of design is an analogy to the learning of science itself, and that is “how much should the learners/users have to do by themselves vs. how much should the curriculum/system do for them?” Most computer users have been mostly exposed to “productivity tools” in which as many things as possible have been done for them. The kinds of educational environments we are talking about here are at their best when the learner does the important parts by themselves, and any black or translucent boxes serve only on the side and not at the center of the learning. What is the center and what is the side will shift as the learning progresses, and this has to be accommodated.

OTOH, the extreme build it from scratch approach is not the best way for most minds, especially young ones. The best way seems to be to pick the areas that need to be from scratch and do the best job possible to make all difficulties be important ones whose overcoming is the whole point of the educational process (this is in direct analogy to how sports and music are taught – the desire is to facilitate a real change for the better, and this can be honestly difficult for the learner).

The Fantasy and Abuse of the Manipulable User  

Some quotes from Betsy Haibel’s must-read essay. If you make or use software, you should read it.

Deceptive linking practices – from big flashing “download now” buttons hovering above actual download links, to disguising links to advertising by making them indistinguishable from content links – may not initially seem like violations of user consent. However, consent must be informed to be meaningful – and “consent” obtained by deception is not consent.

Consent-challenging approaches offer potential competitive benefits. Deceptive links capture clicks – so the linking site gets paid. Harvesting of emails through automatic opt-in aids in marketing and lead generation. While the actual corporate gain from not allowing unsubscribes is likely minimal – users who want to opt out are generally not good conversion targets – individuals and departments with quotas to meet will cheer the artificial boost to their mailing list size.

These perceived and actual competitive advantages have led to violations of consent being codified as best practices, rendering them nigh-invisible to most tech workers. It’s understandable – it seems almost hyperbolic to characterize “unwanted email” as a moral issue. Still, challenges to boundaries are challenges to boundaries. If we treat unwanted emails, or accidentally clicked advertising links, as too small a deal to bother, then we’re asserting that we know better than our users what their boundaries are. In other words, we’re placing ourselves in the arbiter-of-boundaries role which abuse culture assigns to “society as a whole.”[…]

The industry’s widespread individual challenges to user boundaries become a collective assertion of the right to challenge – that is, to perform actions which are known to transgress people’s internally set or externally stated boundaries. The competitive advantage, perceived or actual, of boundary violation turns from an “advantage” over the competition into a requirement for keeping up with them.

Individual choices to not fall behind in the arms race of user mistreatment collectively become the deliberate and disingenuous cover story of “but everyone’s doing it.”[…]

The hacker mythos has long been driven by a narrow notion of “meritocracy.” Hacker meritocracy, like all “meritocracies,” reinscribes systems of oppression by victim-blaming those who aren’t allowed to succeed within it, or gain the skills it values. Hacker meritocracy casts non-technical skills as irrelevant, and punishes those who lack technical skills. Having “technical merit” becomes a requirement to defend oneself online. […]

It’s easy to bash Zynga and other manufacturers of cow clickers and Bejeweled clones. However, the mainstream tech industry has baked similar compulsion-generating practices into its largest platforms. There’s very little psychological difference between the positive-reinforcement rat pellet of a Candy Crush win and that of new content in one’s Facebook stream.[…]

I call on my fellow users of technology to actively resist this pervasive boundary violation. Social platforms are not fully responsive to user protest, but they do respond, and the existence of actual or potential user outcry gives ethical tech workers a lever in internal fights about user abuse.

Facebook Rooms  

Inspired by both the ethos of these early web communities and the capabilities of modern smartphones, today we’re announcing Rooms, the latest app from Facebook Creative Labs. Rooms lets you create places for the things you’re into, and invite others who are into them too.[…]

Not only are rooms dedicated to whatever you want, room creators can also control almost everything else about them. Rooms is designed to be a flexible, creative tool. You can change the text and emoji on your like button, add a cover photo and dominant colors, create custom “pinned” messages, customize member permissions, and even set whether or not people can link to your content on the web. In the future, we’ll continue to add more customizable features and ways to tweak your room. The Rooms team is committed to building tools that let you create your perfect place. Our job is to empower you.

My guess is Rooms is a strategic move to try to attract teens who seek privacy in apps like Snapchat. Seems like a good place for a clique.

When Women Stopped Coding  

Steve Henn for NPR:

A lot of computing pioneers — the people who programmed the first digital computers — were women. And for decades, the number of women studying computer science was growing faster than the number of men. But in 1984, something changed. The percentage of women in computer science flattened, and then plunged, even as the share of women in other technical and professional fields kept rising.

What happened?

This is something Hopscotch is trying to change.

Swift for Beginners

Last week I published a little essay about Swift, imploring iOS developers to start learning Swift today:

The number one question I hear from iOS developers when Swift comes up is “Should you switch to Swift?” and my answer for that is “Probably yes.” It’s of course not a black and white answer and depends on your situation, but if you’re an experienced Objective C programmer, now is a great time to start working in Swift.

In last week’s iOS Dev Weekly, however, Dave Verwer astutely pointed out that this advice is really aimed at experienced iOS developers:

Jason talks mainly about experienced iOS developers in this article but I believe it’s a whole different argument for those who are just getting started.

The intersection of programming languages and learning programming happens to be precisely my line of work, so I thought I’d offer some advice for beginner iOS developers and Swift.

Should you learn Swift or Objective C?

For newcomers to the iOS development platform, I reckon the number one question asked is “Should I learn with Objective C or with Swift?” Contrary to what some may say (e.g., Big Nerd Ranch), I suggest you start learning with Swift first.

The main reason for this argument is cruft: Swift doesn’t have much and Objective C has a whole bunch. Swift’s syntax is much cleaner than Objective C’s, which means a beginner won’t get bogged down with unnecessary details that would otherwise trip them up (e.g., header files, pointer syntax, etc.).

I used to teach a beginners’ iOS development course, and while most learners could grasp the core concepts easily, they were often tripped up by the implementation details of Objective C oozing out of the seams. “Do I need a semicolon here? Why do I need to copy and paste this method declaration? Why don’t ints need a pointer star? Why do strings need an @ sign?” The list goes on.

When you’re learning a new platform and a new language, you have enough of an uphill battle without having to deal with the problems of a 1980s programming language.

In place of Objective C’s header files, imports, and separate method declarations, Swift has just one file where methods are declared and implemented together, with no need to import files within the same module. There goes all that complexity right out the window. In place of Objective C’s pointer syntax, in Swift both reference and value types use the same syntax.
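
To make that concrete, here’s a minimal sketch (my own contrived example, in present-day Swift syntax) of a small class living entirely in one file; the Objective C equivalent would be split across a header and an implementation file, and anything using it would have to import the header:

```swift
// Counter.swift — the declaration and the implementation live together.
// In Objective C this would be split into Counter.h (the @interface)
// and Counter.m (the @implementation), plus an #import "Counter.h"
// in every file that uses it.
class Counter {
    private(set) var count = 0   // no separate property declaration needed

    func increment(by amount: Int = 1) {
        count += amount          // no semicolons, no pointer stars
    }
}

let counter = Counter()          // reference and value types share this syntax
counter.increment()
counter.increment(by: 4)
print(counter.count)             // 5
```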

Learning Xcode and Cocoa and iOS development all at once is a monumental task, but if you learn it with Swift first you’ll have a much easier time taking it all in.

Swift is ultimately a bigger language than Objective C, with features like advanced enums, Generics/Templates, tuples, operator overloading, etc. There is more Swift to learn but Cocoa was written in Objective C and it doesn’t make use of these features, so they’re not as essential for doing iOS development today. It’s likely that in the coming years Cocoa will adopt more Swift language features, so it’s still good to be familiar with them, but the fact is learning a core amount of Swift is much more straightforward than learning a core amount of Objective C.
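
If you’re curious what a few of those Swift-only features look like in practice, here’s a small, contrived sketch (the names are my own, not anything from Cocoa):

```swift
// An enum with associated values — something Objective C enums can't express.
enum LoadResult {
    case success(bytes: Int)
    case failure(message: String)
}

// A generic function that returns a tuple.
func bounds<T: Comparable>(of values: [T]) -> (min: T, max: T)? {
    guard var minimum = values.first, var maximum = values.first else { return nil }
    for value in values {
        minimum = min(minimum, value)
        maximum = max(maximum, value)
    }
    return (minimum, maximum)
}

let result = LoadResult.success(bytes: 1024)
switch result {
case .success(let bytes):   print("loaded \(bytes) bytes")
case .failure(let message): print("failed: \(message)")
}

if let b = bounds(of: [3, 1, 4, 1, 5]) {
    print(b.min, b.max)   // 1 5
}
```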

A Note about Learning Programming with Swift

I should point out that I’m not necessarily advocating for Swift as your first programming language; rather, I’m suggesting that if you’re a developer who’s new to iOS development, you should start with Swift.

If you’re new to programming, there are many better languages for learning, like Lisp, Logo, or Ruby to name just a few. You may very well be able to cut your teeth learning programming with Swift, but it’s not designed as a learning language and has a programming mental model of the “you are a programmer managing computer resources” kind.

Learning Objective C

You should start out learning iOS development with Swift, but once you become comfortable, you should learn Objective C too.

Objective C has been the programming language for iOS since its inception, so there’s lots of it out there in the real world, including books, blog posts, and other projects and frameworks. It’s important to know how to read and write Objective C, but the good news is once you’ve become decent with Swift, programming with Objective C isn’t much of a stretch.

Although their syntaxes differ in some superficial ways, the kind of code you write is largely the same between the two. -viewDidLoad and viewDidLoad() may be implemented in different syntaxes, but what you’re trying to accomplish is basically the same in either case.
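
For example, here’s the familiar view-lifecycle override written in Swift, with my rough rendering of the Objective C spelling left in a comment so you can see how superficial the difference is:

```swift
import UIKit

class GreetingViewController: UIViewController {
    // The Objective C version of the same override would read roughly:
    //
    //   - (void)viewDidLoad {
    //       [super viewDidLoad];
    //       self.view.backgroundColor = [UIColor whiteColor];
    //   }
    //
    // Different punctuation, same intent.
    override func viewDidLoad() {
        super.viewDidLoad()
        view.backgroundColor = .white
    }
}
```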

The difficult part about learning Objective C after learning Swift, then, is not learning Cocoa and its concepts but instead the earlier mentioned syntactic salt that comes with the language. Because you already know a bit about view controllers and gesture recognizers, you’ll have a much easier time figuring out the oddities of a less modern syntax than you would have if you tried to learn them both at the same time. It’s much easier to adapt this way.

Learn Swift

Perhaps the biggest endorsement for learning Swift comes from Apple:

Swift is a successor to the C and Objective-C languages.

It doesn’t get much clearer than that.

Patterns to Help You Destroy Massive View Controller  

Soroush Khanlou:

View controllers become gargantuan because they’re doing too many things. Keyboard management, user input, data transformation, view allocation — which of these is really the purview of the view controller? Which should be delegated to other objects? In this post, we’ll explore isolating each of these responsibilities into its own object. This will help us sequester bits of complex code, and make our code more readable.
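
To make the idea concrete, here’s one hedged sketch of the kind of extraction Khanlou is describing (the names are mine, not from his post): the table view’s data source moves out of the view controller and into its own object.

```swift
import UIKit

// The data source is its own object, so the view controller no longer
// owns the row-building logic. (Example names are mine, purely illustrative.)
final class ItemsDataSource: NSObject, UITableViewDataSource {
    private let items: [String]

    init(items: [String]) {
        self.items = items
    }

    func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
        return items.count
    }

    func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
        let cell = tableView.dequeueReusableCell(withIdentifier: "Cell", for: indexPath)
        cell.textLabel?.text = items[indexPath.row]
        return cell
    }
}

final class ItemsViewController: UITableViewController {
    // Keep a strong reference — table views only hold their dataSource weakly.
    private let dataSource = ItemsDataSource(items: ["One", "Two", "Three"])

    override func viewDidLoad() {
        super.viewDidLoad()
        tableView.register(UITableViewCell.self, forCellReuseIdentifier: "Cell")
        tableView.dataSource = dataSource
    }
}
```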

Hopscotch and Mental Models

On May 8, 2014, after many long months of work, we finally shipped Hopscotch 2.0, which was a major redesign of the app. Hopscotch is an interactive programming environment on the iPad for kids 8 and up, and while the dedicated learners used our 1.0 with great success, we wanted to make Hopscotch more accessible for kids who may have otherwise struggled. Early on, I pushed for a rethinking of the mental model we wanted to present to our programmers so they could better grasp the concept. While I pushed some of the core ideas, this was of course a complete team effort. Every. Single. Member. of our (admittedly small!) team contributed a great deal over many long discussions and long days building the app.

What follows is an examination of mental models, and the models used in various versions of Hopscotch.

Mental models

The human brain is 100,000-year-old hardware we’re relatively stuck with. Applications are software created by 100,000-year-old hardware. I don’t know which is scarier, but I do know it’s a lot easier to adapt the software than it is to adapt the hardware. A mental model is how you adapt your software to the human brain.

A mental model is a device (in the “literary device” sense of the word) you design for humans to use, knowingly or not, to better grasp concepts and accomplish goals with your software. Mental models work not by making the human “play computer” but by making the computer “play human,” thus giving the person a conceptual framework to think in while using the software.

The programming language Logo uses the mental model of the Turtle. When children program the Turtle to move in a circle, they teach it in terms of how they would move in a circle (“take a step, turn a bit, over and over until you make a whole circle”) (straight up just read Mindstorms).
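
As a rough illustration (in Swift rather than Logo, with a made-up Turtle type standing in for the real one), the circle program reads almost exactly like those spoken instructions:

```swift
import Foundation

// A made-up Turtle type standing in for Logo's — the point is the
// egocentric instructions, not the drawing machinery.
struct Turtle {
    var heading = 0.0           // degrees
    var x = 0.0, y = 0.0

    mutating func forward(_ distance: Double) {
        let radians = heading * .pi / 180
        x += distance * cos(radians)
        y += distance * sin(radians)
    }

    mutating func turn(_ degrees: Double) {
        heading += degrees
    }
}

var turtle = Turtle()
// "Take a step, turn a bit, over and over until you make a whole circle."
for _ in 0..<360 {
    turtle.forward(1)
    turtle.turn(1)
}
```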

There are varying degrees of success in a program’s mental model, usually correlated with the amount of thought the designers put into the model itself. A successful mental model results in the person having a strong connection with the software, whereas a weak mental model leaves people confused. The model of hierarchical file systems (e.g., “files and folders”) has long been a source of consternation for people because it forces them to think like a computer to locate information.

You may know your application’s mental model very intimately because you created it, but most people will not be so fortunate when they start out. The easiest way for people to understand your application’s mental model is to give them smaller leaps to make—for example, most iPhone apps are far more alike than they are different—so they don’t have to tread too far into the unknown.

One of the more effective tricks we employ in graphical user interfaces is the spatial analogy. Views push and pop left and right on the screen, suggesting the application exists in a space extending beyond the bounds of the rectangle we stare at. Some applications offer a spatial analogy in terms of a zooming interface, like a Powers of Ten but for information (or “thought vectors in concept space”, to quote Engelbart) (see Jef Raskin’s The Humane Interface for a thorough discussion on ZUIs).

These spatial metaphors can be thought of as gestures in the Raskinian sense of the term (defined as “…an action that can be done automatically by the body as soon as the brain ‘gives the command’. So Cmd+Z is a gesture, as is typing the word ‘brain’”) where instead of acting, the digital space provides a common, habitual environment for performing actions. There is no Raskinian mode switch because the person already has familiarity with the space.

Hopscotch 1.x

Following in the footsteps of the Logo turtle, Hopscotch characters are programmed with the same egocentric mental model (here’s a video of programming one character in Hopscotch 1.0). If I want Bear to move in a circle, I first ponder how I would move in a circle and translate this to Hopscotch blocks. If this were all there was to the story, this mental model would be pretty sufficient. But Hopscotch projects can have multiple programmed characters executing at the same time. Logo’s model works well because it’s clear there is one turtle to one programmer, but when there are multiple characters to take care of, it’s conceptually more of a stretch to program them all this way.

Hopscotch 1.0 was split diametrically between the drag-and-drop code blocks for the various characters in your project and the Stage, the area where your program executes and your characters wiggle their butts off, as directed. This division is quite similar to the “write code / execute program” model most programming environments provide developers, but that doesn’t mean it’s appropriate (for children or professionals). Though the characters were tangible (tappable!) on the Stage, they remained abstract in the code editor. Simply put, there wasn’t a strong connection between your code and your program. This discord made it very difficult for beginners to connect their code to their characters.

Hopscotch 2.x

In the redesign, we unified the Stage-as-player with the Stage-as-editor. In the original version, you programmed your characters by switching between tabs of code, but in the redesign you see your characters as they appear on the Stage. Gone are the two distinct modes; instead you just have the Stage, which you can edit. This means you no longer position characters with a small graph, but instead pick up your characters and place them directly.

The code blocks, which used to live in the tabs, now live inside the characters themselves. This gives a stronger mental model of “My character knows how to draw a circle because I programmed her directly”. When you tap a character you see a list of “Rules” appear as thought bubbles beside the character. Rules are mini-programs for each character that are played for different events (e.g., “When the iPad is tilted, make Bear dance”) and you edit their code by tapping into them. This attaches the abstract concept of “code” to the very spatial and tangible characters you’re trying to program, and we found beginners could grasp this concept much more quickly than the original model.

Along the way, we added “little things” like custom functions and a mini code preview that highlights code blocks as it executes, to let programmers quickly see the results of their changes for the character they’re programming. These aren’t additions to the mental model per se, but they do help close the gap between abstract code and your characters following your program.

Our redesign got a lot of love. We were featured by Apple, Recode, and even the Grubers gave us some effusive love. It wasn’t perfect, and we’ve continued to work on the design in the intervening months, but it’s been a big improvement.

A mental framework

A strong mental model benefits the people using your software because it helps them and your software meet each other halfway. But mental models also help you as a designer to understand the messages you send through your application. By rethinking our mental model for Hopscotch, we dramatically improved both how we build the program and how people use it, and it’s given us a framework to think in for the future. As you build or use applications, be aware of the signals you send and receive, and it will help you understand the software better.

Swift

After playing with Swift in my spare time for most of the summer, and now using it full time at Hopscotch for about a month, I thought I’d share some of my thoughts on the language.

The number one question I hear from iOS developers when Swift comes up is “Should you switch to Swift?” and my answer for that is “Probably yes.” It’s of course not a black and white answer and depends on your situation, but if you’re an experienced Objective C programmer, now is a great time to start working in Swift. I would suggest switching to it full time for all new iOS work (I wouldn’t recommend going back and re-writing your old Objective C code, but maybe replace bits and pieces of it as you see fit).

(If you’re a beginner to iOS development, see my thoughts on Swift for Beginners)

Idiomatic Swift

One reason I hear for developers wanting to hold off is “Swift is so new there aren’t really accepted idioms or best practices yet, so I’m going to wait a year or two for those to emerge.” I think that’s a fair argument, but I’d argue it’s better for you to jump in and invent them now instead of waiting for somebody else to do it.

I’m pretty sure when I look back on my Swift code a year from now I’ll cringe from embarrassment, but I’d rather be figuring it all out now; I’d rather be helping to establish what Good Swift looks like than just seeing what gets handed down. The conventions around the language are malleable right now because nothing has been established as Good Swift yet. It’s going to be a lot harder to influence Good Swift a year or two from now.

And the sooner you become productive in Swift, the sooner you’ll find areas where it can be improved. Like the young Swift conventions, Swift itself is a young language—the earlier in its life you file Radars and suggest improvements to the language, the more likely those improvements will be made. Three years from now Swift The Language is going to be a lot less likely to change compared to today. Your early radars today will have enormous effects on Swift in the future.

Swift Learning Curve

Another reason I hear for not wanting to learn Swift today is not wanting to take a major productivity hit while learning the language. In my experience, if you’re an experienced iOS developer you’ll be up to speed with Swift in a week or two, and then you’ll get all the benefits of Swift (even just not having header files, or not having to import files all over the place, makes programming in Swift so much nicer than Objective C that you might not want to go back).

In that week or two when you’re a little slower at programming than you are with Objective C, you’ll still be pretty productive anyway. You certainly won’t become an expert in Swift right away (because nobody except maybe Chris Lattner is yet anyway!), but you’ll be writing arguably cleaner code, and you might even have some fun doing it.

I’ve been keeping a list of tips I’ve picked up while learning Swift so far, like how the compatibility with Objective C works, and how to do certain things (like weak capture lists for closures). If you have any suggestions of your own, feel free to send me a Pull Request.
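
As a taste of one of those tips, here’s roughly what a weak capture list looks like (a minimal, contrived sketch with made-up names):

```swift
final class Downloader {
    var completion: (() -> Void)?
    let logPrefix = "Downloader:"

    func start() {
        // [weak self] stops the closure from retaining self. Without it,
        // self would retain the closure (via `completion`) and the closure
        // would retain self — a retain cycle.
        completion = { [weak self] in
            guard let self = self else { return }   // self may already be gone
            print(self.logPrefix, "finished")
        }
    }
}

let downloader = Downloader()
downloader.start()
downloader.completion?()   // prints "Downloader: finished"
```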

Grab Bag of Various Caveats

  • I don’t fully understand initializers in Swift yet, but I kind of hate them. I get the theory behind them, that everything strictly must be initialized, but in practice this super sucks. This solves a problem I don’t think anybody really had.

  • Compile times for large projects suck. They’re really slow, because (I think) any change in any Swift file causes all the Swift files to be recompiled on build. My hunch is that breaking your project up into smaller modules (aka frameworks) should relieve the slow build times. I haven’t tried this yet.

  • The Swift debugger feels pretty non-functional most of the time. I’m glad we have Playgrounds to test out algorithms and such, but unfortunately I’ve mainly had to resort to pooping out println()s of debug values.

  • What the hell is with the name println()? Would it have killed them to actually spell out the word printLine()? Do the language designers know about autocomplete?

  • The “implicitly unwrapped optional operator” (!) should really be called either the “subversion operator” or the “crash operator.” The major drumbeat we’ve heard about Swift is that it’s not supposed to let you do unsafe things, hence (among other things) we have Optionals. By implicitly unwrapping an optional, we’re telling the compiler “I know better than you right now, so I’m just going to go ahead and subvert the rules and pretend this thing isn’t nil, because I know it’s not.” When you do this, you’re either going to be correct, in which case Swift was wrong for thinking the value might be nil when it isn’t; or you’re going to be incorrect and cause your application to crash. There’s a sketch of this at the end of the list, too.
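
To make the initializer complaint a little more concrete, here’s a tiny sketch (the Point type is made up for illustration): the compiler refuses to build an initializer that leaves any stored property unset.

    struct Point {
        var x: Double
        var y: Double

        init(x: Double) {
            self.x = x
            self.y = 0 // delete this line and the compiler rejects the
                       // initializer: every stored property must be set
                       // before init returns
        }
    }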
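
And here’s roughly what I mean by the “crash operator,” again as a made-up sketch rather than code from a real project:

    let maybeName: String? = nil

    // Safe: optional binding only runs the first branch when a value exists.
    if let name = maybeName {
        println("Hello, \(name)")
    } else {
        println("No name yet")
    }

    // Subverting the rules: force unwrapping says "trust me, it's not nil."
    // Because maybeName actually is nil here, uncommenting the next line
    // traps at runtime and crashes the app.
    // let definitelyName = maybeName!

The safe path makes the nil case explicit; the force unwrap trades that check for a potential crash.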

Objective Next

Earlier this year, before Swift was announced, I published an essay, Objective Next, which discussed replacing Objective C, both in terms of the language itself and, more importantly, in terms of what we should thirst for in a successor:

We don’t need a better Objective C; we need a better way to make software. We do that in two steps: figure out what we’re trying to accomplish, and then figure out how to accomplish it. It’s simple, but nearly every post about replacing Objective C completely ignores these two steps.

In short, a replacement for Objective C that just offers a slimmed-down syntax isn’t really a victory at all. It’s simply a new-old thing: a new way to accomplish the exact same kind of software. In a followup essay, I wrote:

If there was one underlying theme of the essay, it was “Don’t be trapped by Instrumental Thinking”, that particularly insidious kind of thinking that plagues us all (myself included) into thinking about new ideas or technologies only in terms of what we’re currently doing. That is, we often can only see or ask for a new tool to benefit exactly the same job we’re currently doing, where instead we should consider new kinds of things it might enable us to do. […]

When talking in terms of Objective C development, I don’t mean “I’m dreaming of a replacement that’ll just let you create the exact same identical apps, it’ll just have fewer warts,” but I instead mean I’m dreaming of a new, fundamental way to approach building software, that will result in apps richer in the things we care about, like visual and graphic design, usability and interaction, polish, and yes, offer enhancements to the undercarriage, too.

Unfortunately, that new-old thing is exactly what we got with Swift. Swift is a better way to create the exact same kind of software we’ve been making with Objective C. It may crash a little less, but it’s still going to work exactly the same way. In fact, because Swift is far more static than Objective C, we might even be a little more limited in what we can do. For example, as quoted in a recent link:

The quote in this post’s title [“It’s a Coup”], from Andrew Pontious, refers to the general lack of outrage over the loss of dynamism. In broad strokes, the C++ people have asserted their vision that the future will be static, and the reaction from the Objective-C crowd has been apathy. Apple hasn’t even really tried to make a case for why this U-turn is a good idea, and yet the majority seems to have rolled over and accepted it, anyway.

I still think Swift is a great language and you should use it, but I lament that it isn’t more forward-thinking. The intentional lack of a garbage collector really sealed the deal for me: Swift isn’t a new language; it’s C++++. I’m glad to get to program in it, and I think the more people using it today, the better it will be tomorrow.

Belief

I’ve had a thought about beliefs stuck in my head for a few months that I think might be useful to share. It goes something like this:

A belief about something is scaffolding we should use until we’ve learned more truths about that something.

First, I should point out I don’t think this statement is necessarily entirely true (though it could be), but I do think it’s a useful starting point for a discussion. Second, I also don’t think this view on belief is widely practiced, but I do think it would make for more productive use of beliefs themselves.

We humans tend to be a very belief-based bunch. There are the obvious beliefs, like religion and other similar deifications (“What would our forefathers think?”), but we also hold strong beliefs all the time without even realizing it.

The public education systems in North America (as I experienced firsthand in Canada and as I’ve read about in America) are based on students believing and internalizing a finite set of “truths” (this is known as a curriculum) and taking precisely those beliefs for granted.

Science presents perhaps the best evidence that we’re largely a belief-based species, because science exists to seek truths our beliefs are not adapted to explain. Before the invention of science, we relied on our beliefs to make sense of the world as best we could, but beliefs painted a blurry, monochromatic picture at best. Science is hard because it has to be hard: its job is to turn parts of the universe we can’t intuit into something we can place in our concept of reality. But it does a far better job of explaining reality than our beliefs do.

A friend of mine recently told me, “I have beliefs about the world just like everybody else…I just don’t trust them, is all.” I think that’s a productive way to think about beliefs. It would probably be impossible to rid the world of belief, but a better approach is to acknowledge and understand belief as a useful, temporary tool. We should teach people to think about belief as a means to an end, a support system until more is learned about something. Most importantly, we should teach that beliefs have a shelf life, and should not be trusted permanently.

The Future Programming Manifesto  

Jonathan Edwards:

Most of the problems of software arise because it is too complicated for humans to handle. We believe that much of this complexity is unnecessary and indeed self-inflicted. We seek to radically simplify software and programming. […]

We should measure complexity as the cumulative cognitive effort to learn a technology from novice all the way to expert. One simple surrogate measure is the size of the documentation. This approach conflicts with the common tendency to consider only the efficiency of experts. Expert efficiency is hopelessly confounded by training and selection biases, and often justifies making it harder to become an expert. We are skeptical of “expressive power” and “terseness”, which are often code words for making things more mathematical and abstract. Abstraction is a two-edged sword.

It’s a Coup  

Michael Tsai:

The quote in this post’s title, from Andrew Pontious, refers to the general lack of outrage over the loss of dynamism. In broad strokes, the C++ people have asserted their vision that the future will be static, and the reaction from the Objective-C crowd has been apathy. Apple hasn’t even really tried to make a case for why this U-turn is a good idea, and yet the majority seems to have rolled over and accepted it, anyway.

Questions!

Why don’t more people question things? What does it mean to question things? What kinds of things do we need to question? What kinds of answers do we hope to find from those questions? What sort of questions are we capable of answering? How do we answer the rest of the questions? Would it help if more people read books? Why does my generation, self included, insist on not reading books? Why do we insist on watching so much TV? Why do we insist on spending so much time on Twitter or Facebook? Why do I care so much how many likes or favs a picture or a post gets? What does it say about a society driven by that? Why are we so obsessed with amusing ourselves to death? Why are there so many damn photo sharing websites and todo applications? Is anybody even reading this? How do we make the world a better place? What does it mean to make the world a better place? Why do we think technology is the only way to accomplish this? Why are some people against technology? Do these people have good reasons for what they believe? Are we certain our reasons are better? Can we even know that for sure? What does it mean to know something for sure? Do computers cause more problems than they solve? Will the world be a better place if everyone learns to program? If we teach all the homeless people Javascript, will they stop being homeless? What about the sexists and the racists and the fascists and the homophobes? Who else can help? How do we get all these people to work together? How do we teach them? How can we let people learn in better ways? How can we convince people to let go of their strategies adapted for the past and instead focus on the future? Why are there so many answers to the wrong questions?