I have wanted to participate in NaNoWriMo for close to a decade―basically ever since I heard about its existence―but I was overwhelmed with my graduate studies and other things (excuses, excuses). Well, I finished graduate school and rid myself of all excuses. Tonight I successfully completed my first NaNoWriMo. Don’t ask to see the fruits of this labor just yet. Like most first drafts, it is in a pretty unfortunate state at the moment. That said, I’m really excited to have the raw material out of my head and on “paper” as much of what I plan to work on in the coming year will derive from this first draft in some way.
More to come.
Also, note my wonderful procrastination in all of its glory in the chart below.
I’ve been thinking about process lately. Okay, I’ve honestly been thinking about process for years. I’ve figured out pieces of my own process over the years through trial and error, reading about other people’s processes, and dumb luck. What interests me most now is how to increase the likelihood that I actually use my own process, at least to the extent I’ve identified it at the present moment.
It seems that the best way to improve the use of my own best―at the moment―process will be removing the barriers that hinder continuing or discourage even starting. I’ve identified the primary barrier as the mess of a work area I’ve used through the birth of two children and a PhD[1. if you’re interested you can read my dissertation here]. It’s a tick above freezing in the winter and a constant mess. Home sweet home. The image above is a real shot of my desk without any pre-photo cleaning (promise). What is missing is the disarray around and beneath and before the desk. I grow anxious just walking into the space.
What is surprising is how much and how little work I’ve put into the arrangement seen above. The 2×4s serve the dual purpose of raising the monitor to eye level and providing a handy space for the keyboard to reside when desk space is at a premium. Unfortunately, I rarely use that latter, pre-planned feature. The keyboard is central to my workflow even when I’m using the Wacom tablet to create or edit pixels and vectors. The hard drive to the right of the under-used notebooks is meant for backups but mostly holds older copies of things I already have newer live copies of (or, worse still, unnecessary copies of copies).
It turns out that I’m a little afraid even to go through the work of cleaning off the hard drive for fear of getting lost in what has become an indiscriminate time capsule of everything (instead of just what was deemed most important). Fear, in fact, underlies much of my trepidation about cleaning the desk and the surrounding areas. But fear of what?
Time is what I most fear losing. The reframe is simple: I lose time anyway. I wish that simply saying you lose time anyway were more motivating. Alas, the human mind is not always rational. Such is life. I find it difficult to get certain things done for fear of losing the time spent doing them―even when I’m confident a thing would help most in the future, I worry I might have done something more productive instead. Again, brains are weird. My poor brain doesn’t want to lose time (that it will “lose” anyway).
I know this is imprudent. That’s the whole reason I’m writing about it. I’m spending unnecessary time on something less productive in order to fully explore how productive simply starting can be. And this is a key factor I’ve discovered about myself over the years―
I’m more productive when I’m less efficient.
This sounds either unbelievably stupid or oddly profound. I wish it were wrong. I wish that I could be endlessly driven by efficiency improvements iterated over a lifetime. No matter―the slow way is the productive and efficient way for me. And slow involves some uncomfortable (for me) friends―mainly paper.
I desperately want to like all digital technologies, but I like what I like despite all efforts so far. Paper is the friend I’m embarrassed to admit I profit from so greatly. Paper―despite what I and many others assume―is a technology. It’s hard to think of it as such since it’s not battery-powered. That lack of a battery is a feature, not a bug. Recognizing paper as a technology is an important reframe for me because it places paper on the same level as other, more interesting and distracting technologies. And this is the core struggle: I’ve known forever that paper makes me more productive (while seemingly less efficient), yet I hate taking a non-digital step in an ultimately digital process. The problem is that if I don’t take the non-digital paper step, I don’t get anything done.
Here’s a drawing I did on an index card a while ago.
Listen, I wish I were a better artist, but no matter: I take that drawing and scan it.
Then I vectorize the image I’ve scanned.
And then I color it and place it back on an index-card-shaped white rectangle floating above the void.
That’s the process I’m using now. I’m trying to own it as the process I use and not focus on the myriad ways I could get lost trying to improve its speed. It’s like that xkcd comic about time―is the time spent automating actually worth it?
Where is the heart? Often we describe it as being on the canvas, in the lilt of a voice, or left on the stage. But where is our own heart?
We attribute superhuman qualities to creative people. We construct chasms between their accomplishments and our abilities. We manufacture fear, uncertainty, and doubt about our own efforts while marveling at the perfection of those we admire from afar. Our proximity to our own work reveals what we refuse to believe is true for others―that the work is messy, hard, and confusing.
Worse, we often focus on external factors to explain our own lack of effort or nonexistent works. We focus on tips, tricks, tools, and hacks, and in so doing allow these mental diversions to distract us from our goals. We do this even though we know we have the tools to start working right now. The effort to begin would be minimal or unnecessary―yet we still delay.
We know we’re stalling for a reason but we can’t quite articulate it (even to ourselves). If only we stopped the introspection and started any action. What would happen then? We know that the point is to start, to focus, to strive, to capture, to evaluate, and to share. Why is this so difficult?
start with heart
Start with heart. Start caring deeply about the things you seek to create. A great deal of effort is required to bring something new into being. Spend time focusing on the things you love. Then focus that love and craft it into something that tries―but ultimately fails―to capture that specific and incommunicable love. The beauty is in the striving.
Heart can be the objective of your work and the fire that provokes your motivation to capture it.
I have a goal of starting with heart. It may not be easy. It may feel impossible. It may be silly to others. No matter. I will love what I do. I will start with heart.
Years ago I funded a wonderful project. We raised money to pay musicians to play classical music scores so that we could record them and release them immediately into the public domain. That effort was a success.
Now, I’m finally able to share the fruits of the second project to do the same with the complete works of Frédéric Chopin. This is a wonderful collection. Please listen here. Please download. Please share.
This is my vote for best post of 2014. What a fascinating look at structure via data analysis. The entire article is such a refreshing surprise. It explores the structural arcs of TV and movie scripts across screen time (by breaking each episode into 6 or 12 even chunks), then plots each show’s or movie’s movement through those topics as a single, multidimensional visual graph. The result more or less confirms Aristotle’s dominance in popular storytelling.
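Out of curiosity, here is a toy sketch of that chunking step in Python. This is my own illustration of the general idea―not Dr. Schmidt’s actual code, which works over topic models fit to thousands of scripts rather than hand-picked word lists:

```python
# Toy illustration of the "even chunks across screen time" idea:
# split a script into N contiguous, roughly equal word spans, then
# measure how a chosen topic vocabulary rises and falls across them.
# (An assumption about the general approach, not the original code.)

def chunk_script(text, n_chunks=12):
    """Split a script into n_chunks contiguous, roughly equal word spans."""
    words = text.split()
    size = max(1, len(words) // n_chunks)
    chunks = [words[i * size:(i + 1) * size] for i in range(n_chunks - 1)]
    chunks.append(words[(n_chunks - 1) * size:])  # remainder joins the last chunk
    return chunks

def topic_profile(chunks, topic_words):
    """Fraction of each chunk's words that belong to a topic vocabulary."""
    profile = []
    for chunk in chunks:
        hits = sum(1 for w in chunk if w.lower().strip(".,!?") in topic_words)
        profile.append(hits / len(chunk) if chunk else 0.0)
    return profile
```

A show’s “arc” through topic space is then just these profiles―one per topic―plotted against chunk index, i.e., normalized screen time, which is what lets scripts of very different lengths land on the same chart.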
What is this saying? That in the grand corpus of tens of thousands of hours of studio-approved, investor-funded, union-written scripts, two major trends stand out: one set of directional trends, advancing continuously through the course of the film, and one cyclical, through which the language returns back to its origins.
That outcome is to be expected, though it is interesting to see the data produce such conclusive evidence directly from word clusters at the script level. There is a new twist, however, that makes this research particularly interesting:
But although [each individual show] trace[s] out arcs, they do it in their portion of the plot arc space ... The portions of plot-arc space they land in correspond to genre: the crime shows live in an area something like the early middle of a show, while science fiction camps out after the end of the end. … So that clustering is interesting enough: but the omnipresence of the curves suggests that they all follow the same path through space in some way, regardless of where they start
This graph is a wonderfully welcome visual analysis of plot structure that adds to my understanding of how traditional structure functions. I wonder how one would modify this approach for dramatic scripts, particularly across languages and time periods. Where, for instance, would the absurdists land on the chart under this sort of analysis? Absurdism is regarded as a genre, but its defining features are typically understood to be structural rather than topical. Circular plot structure―a hallmark of absurdism―is understood to end where it began, but where does it go in between? I’ve often heard Beckett’s Godot described as a play in which “nothing happens,” but that is not a fair assessment of the script or its productions; it merely illuminates how strongly we expect Aristotelian structure. And what of postmodernism? Are there any defining topical features there? Are there distinct strains of postmodernism? Is topical-textual analysis the best way of evaluating those scripts? Are the scripts even the element that makes a production postmodern?
Dr. Schmidt’s post made me smile. It provokes so many new questions. This type of research is extremely interesting. Now go and read!
I hope artists will pause and realize that misplaced blame and oversimplification of the issues could set us back. Physical album sales are not the long-term solution (case in point: the laptop I’m typing on doesn’t have a CD drive)…
The doorbell rings. Young daughter answers. Nobody is there. She looks down. There’s a package. From Amazon . . .
Be sure to make the video on the left fullscreen after clicking this link. Take note of whether the simple aural addition of strings is enough to evoke an emotional response in you. How scary is this on a scale from 1 to 10 (where 10 is terrifying)?
I’m excited to find out about this book. I wasn’t aware that Cory was working on something non-fiction. The second quote below defines the internet as “one great big copy machine” which is amusingly accurate. I had the opportunity to ask an interviewee for a one-word definition of the internet. Her response was the word “open” followed by a string of warm musings about sharing and connecting directly with others to exchange everything from ideas to art. “Copies” wouldn’t be a bad definition either, though I sincerely hope we can re-position that term as a positive one.
The first quote from Palmer and Gaiman expresses a belief that is widely shared on the web. I wonder whether it is a belief or a truth, because these industries still exist and still produce and distribute content. It feels more accurate to say that their business model has shifted away from the direct exchange of content for money after they fought so hard against the web. Things like Patreon are fascinating examples of alternative means of making that exchange more meaningful and direct.
“We are a new generation of artists, makers, supporters, and consumers who believe that the old system through which we exchanged content and money is dead. Not dying: dead.”
Instead, the author advocates for a liberalized system of copyright laws that finally admits that the Internet, for all its virtues and diverse purposes, is nothing but one great big copy machine, and it’s not going away.
Had an idea for a thing. At the moment I’m calling it <broken poets>.
Clark was having one of his moments.
"There were roughly three thousand people using public transport that day; thirty-four were riding on the bus in question; eighteen were Caucasian, twelve were Hispanic, two were black, two were Asian; nine were wearing hats: three were dark, six were light; one of the Hispanic women was wearing gloves – it wasn't cold that day; three of the Caucasian women were wearing winter coats – it was not cold that day; and one of the Caucasian men noticed this disparity in dress before the crash that killed thirty-three of them – I should not have questioned them about their seasonally inappropriate attire so close to the end of their mortal lives."
The department store worker asked again, "Do you need this gift wrapped, sir?"
"That would be nice. I could give it to someone if I chose to. It's always good to have your options open. You have a very nice choice of attire that is seasonally appropriate."
"Thank you. Just one final ribbon –"
"Tremendous! I can tell you're going to live for a long time. God doesn't take the talented or well-dressed."
This article pointed me toward the book Superintelligence, which in turn pointed me toward a shorter paper by the same author, quoted below. Anyone wondering what to gift me this Christmas can look into the linked book above. I enjoy being terrified by other people’s thoughts.
Two things about the quote below are striking:

1. The future of humanity could be decided by algorithms―iterated through countless other machine-iterated algorithms―beginning with something being coded today (hopefully without any bugs or typos).
2. The implication that superintelligence would eradicate human invention.
The first point is terrifying. I’d like to believe that if such a superintelligence is brought forth it would be smart enough to fix any bugs or major design flaws in the original. Of course I assume that what a superintelligence wants and what mere human intelligences want will differ in profound ways. What then?
The second point is—I believe—wrong (assuming we’re using the word ‘invention’ similarly). Unless humanity has been exterminated by this superintelligence then invention will not cease. The more fictional forms of invention (e.g. art) should flourish. I strongly believe that humans are a necessary component in art. Creation, reception, critique, categorization, and other components require human beings.
Superintelligence, if/when it materializes, will spur a Renaissance in human artistic production.
<that’s what I think anyway>
Superintelligence would be the last invention biological man would ever need to make, since, by definition, it would be much better at inventing than we are. All sorts of theoretically possible technologies could be developed quickly by superintelligence — advanced molecular manufacturing, medical nanotechnology, human enhancement technologies, uploading, weapons of all kinds, lifelike virtual realities, self-replicating space-colonizing robotic probes, and more. It would also be super-effective at creating plans and strategies, working out philosophical problems, persuading and manipulating, and much else beside.
It is an open question whether the consequences would be for the better or the worse. The potential upside is clearly enormous; but the downside includes existential risk. Humanity’s future might one day depend on the initial conditions we create, in particular on whether we successfully design the system (e.g., the seed AI’s goal architecture) in such a way as to make it “human-friendly” — in the best possible interpretation of that term.