commentary and current events

I’m Obsessed With Tascha’s Destroyed Diamond

But not for the reasons you think.

About a year ago, this tweet happened:

An embedded tweet from @TaschaLabs, reading “If you make a NFT of a real diamond, and the diamond itself gets destroyed in a fire tomorrow, you still have the same asset. Because the token still exists and is in limited supply just as before. Nothing has changed. What NFT is doing to the concept of asset, few understand.”

This tweet got parodied 11 months later:

An embedded tweet from @JUN|PER with a screenshot of the original @TaschaLabs tweet and the comment “if you buy a donut and get a receipt, and the donut itself gets stolen and eaten, you still have the same asset because the receipt still exists, nothing has changed.”

The parody version got traction by being funny, but it’s not a perfect analogy. And the ways in which the analogy doesn’t line up with the original fascinate me.

First: An NFT isn’t (always) a receipt.

A non-fungible token or NFT is a unique digital identifier. Think of it like a VIN, but existing solely as digital information (i.e. it’s not etched into anything – although I suppose it could be).

Early successes in making money from NFTs usually connected the digital identifier to some type of artwork, whether physical (paint on canvas) or digital (JPEGs of anthropomorphized monkeys). These connections make it easier for ordinary folks to think of NFT ownership as akin to art ownership, or at least to receipt ownership. Maybe I can’t take apart a 50-foot mural painted on the side of City Hall and move it to my own apartment, but I can own a digital identifier that indicates I am connected to that artwork.

It is possible to use NFTs as receipts. For instance, if you were really into my bad MS Paint drawings of 90s cartoon characters, you could purchase one from me, and I could send you an NFT connected to that artwork as proof that you gave me money in exchange for the artwork.

The fact that NFTs can be used as receipts is why the donut analogy makes sense.

Yet – here’s where it gets weird – NFT ownership is not automatically the same thing as item ownership.

To put it in donut terms, NFTs create a world where it’s possible to buy a donut receipt, but never actually own a donut. What you own is a donut receipt. The receipt doesn’t prove you exchanged money for a donut; the receipt is what you received in exchange for your money.

(This conjures up an Inception-like universe of receipt receipts, and receipt receipt receipts, and so on, but we’ll let that eldritch horror lie.)

And Then There Was IP Law

To make this ownership problem more complex, NFTs are commonly attached to creative works: Visual artworks, music, and so on. Put another way, NFTs are commonly attached to items that fall under copyright law.

And in the copyright world, owning the item is not the same thing as owning the underlying rights.

A group of crypto types calling themselves the Spice DAO presumably learned this the hard way when they pooled their funds to purchase a rare copy of a book created for a never-produced screen version of Dune. The Spice DAO then started discussing what they’d do with the book, floating the idea of actually making the version of Dune sketched out in it.

Apparently, no one had ever told them that owning a physical copy of a book doesn’t mean you own the rights to the intellectual property it contains. (Otherwise, everyone who ever bought a copy of Harry Potter would be a multimillionaire.)

So: Owning an NFT doesn’t mean you own any physical referent object in the real world. It doesn’t mean you own any rights at all vis-à-vis any physical referent object or its intellectual property contents. It only means you own a unique digital identifier.

Enter the Destroyed Diamond

@TaschaLabs does, at least, appear to grasp that when you own an NFT, what you own is a unique digital identifier, not the underlying object.

In fact, that’s the entire point behind Tascha’s Destroyed Diamond.

Tascha’s Destroyed Diamond is an NFT that signifies pretty much exactly what the title implies: a diamond, belonging to Tascha, which Tascha intended to destroy – and apparently succeeded in powdering, if not actually vaporizing.

If I’m reading the tweets correctly, the goal of Tascha’s Destroyed Diamond was to demonstrate that the value of the diamond can be ported, transferred, or symbolized by the NFT – the digital identifier. The underlying theory is that because the digital tag is unique, it retains value even if the physical referent (the diamond) no longer exists.

Here’s Why I’m Obsessed

My obsession with Tascha’s Destroyed Diamond boils down to three points:

It verified in obvious terms that buying an NFT is not the same thing as buying its referent.

As I noted above, it could be the same thing. You could buy a diamond and its NFT together, for instance. But buying an NFT doesn’t automatically confer ownership rights in its referent. Buying a donut receipt doesn’t guarantee you get a donut.

After all, it’s still Tascha’s Destroyed Diamond.

If buying an NFT isn’t the same thing as buying its referent, NFT bros are sleeping on major untapped sources of revenue.

For example: If I can buy the NFT of Tascha’s Destroyed Diamond (I can), and if that NFT doesn’t lose value whether or not Tascha’s Destroyed Diamond actually exists in any meaningful sense (suspend your disbelief for a second), then what is stopping me from creating or buying NFTs of other items that also do not currently exist?

There’s no reason time should be a limiting factor. An NFT of the Library of Alexandria – another valuable thing that once existed but has since been destroyed (in an actual fire this time) – should be not only feasible, but staggeringly valuable.

Yet I doubt it will be, because:

NFTs depend on buyers not understanding the first two points.

The original tweet claims that “What NFT is doing to the concept of asset, few understand.” So let me clear it up a bit:

NFTs are a market for unique digital identifiers. That’s it. NFTs are like if your friend sent you a list of randomly generated numbers via Google Docs, and you sold each of those numbers. “They’re valuable because each number is unique!” you tell all your friends. “Buy one now! Nobody will ever have your exact same number!”

“What can I do with these numbers?” your friends ask. “Should I turn them in to the lottery commission to claim a prize? Do they prove I own a car? Can I use them for identity theft? If I put them all in my auto-dialer, can I run a telemarketing scam getting people to donate $1 to me today so they feel happier?”

“No,” you explain. “You just own a string of numbers in this Google Doc. But they are unique!”

…It’s pretty clear why NFTs started having real-world referents fairly quickly.

By the way, the existence of a market for unique digital identifiers doesn’t fundamentally change the concept of an asset. Tascha’s Destroyed Diamond seeks to make clear that NFTs have value separately from any real-world referent. But neither scarcity nor real-world referents are where value comes from.

Like every other item in commerce, NFTs derive their value from demand. Demand is driven by a sense of utility. We exchange money for things because we believe the thing will provide us proportional utility. (I use “utility” broadly to include any sense of being better off, including aesthetic or emotional.)

Book collectors understand this. While book scouts and dealers in rare books do swap price estimates, when pressed they will admit that the actual value of a book is only what someone is willing to pay for it. In other words, the value of used and rare books depends on demand.

For some people, bragging rights and a sense of being “in” on something are high-utility items. NFTs appeal to this crowd, and they’ll continue to do so for some time.

But buying a receipt and buying a donut are not the same thing. If you want to own a real-world referent, buy the referent. If you want to own a digital identifier whose existence depends on technology that already eats more energy in a year than the entire nation of Denmark, buy an NFT.

the creative process

My Favorite Creativity Tools Online, Not Ranked

I like to make things. They don’t even have to be good things. In fact, I’m often happiest when I’m churning out piles of terrible art.

Here are some of my favorite online creativity rabbit holes to fall into. Each of these is free unless otherwise noted.


Botnik

If you’re unfamiliar with my love affair with Botnik’s predictive-text keyboard, it’s because you’re new here. (Welcome!)

Basically, Botnik is your phone’s predictive text on a much larger scale and with a potentially much larger dataset that isn’t confined to things you text most often. Botnik has dozens of pre-loaded keyboard options ranging from “John Keats” to Radiohead lyrics to The Joy of Cooking. You can also feed it your own text banks as UTF-8 encoded .txt files.

For examples of the fun nonsense you can make with Botnik, check out this list of predictive-text New Year’s resolutions or this predictive-text history of Mother’s Day.

Noteflight (freemium)

If you’ve ever wanted to write music but (a) don’t know if you can, (b) don’t play an instrument and/or (c) hate having to draw all those little dots on manuscript paper which you (d) don’t even own anyway, Noteflight is the obsession for you.

Noteflight is web-based music notation software, which does what it says on the tin: It allows you to write music. Also to play it back immediately, change/edit instruments and voices, and so on.

Check out examples of music I wrote in Noteflight in the Bad Carols series.

The full version requires a subscription, but if you’re new to writing music you can get a lot of mileage out of the free version before you make the switch.

As an occasional music teacher, I especially appreciate features of Noteflight that are annoying af at first, like its insistence on subdividing measures for you. It’s really helpful if you’re not already 100 percent comfortable with the concept of how many beats go in a measure and what that should look like.

Soundation (freemium)

If you want to write music but the previous paragraph’s mention of “subdividing measures” made your eyes cross, try Soundation. It allows you to create music mixer-style, by stacking, looping and editing tracks.

Again, you can pay for a subscription or not, but the free version lets you do quite a bit before you decide whether or not an upgrade is worth your money.

I’ve found that Soundation is highly accessible for middle schoolers and older, whether or not they have any kind of previous music-related education. So put some headphones on your kids and let them mix away.

Scratch

Scratch is an MIT project designed to teach kids how to code, but even as an adult in my 30s I find it’s a lot of fun to put together my own animations and games.

The interface is very user-friendly and intuitive. If you’re super intimidated by anything with the word “coding” in it, though, there’s also a series of tutorials that will walk you through every aspect of Scratch.

Scratch is the kind of thing I would have killed for when I was ten years old, programming my Apple IIGS in the BASIC I’d learned out of my fifth-grade math textbook. It’s a lot of fun and a great way to make goofy things to share with friends.

Canva (freemium)

Canva is a graphic design tool for those of us with zero graphic design chops.  It offers hundreds, maybe thousands, of templates for social media images, blog post headers, invitations and a bunch of other things.

I use it primarily to make graphics for this blog, but there are plenty of other options for Canva use. Many of the templates and images are free, but some of them require payment or a Canva subscription to access – though I’m not sure you need one if you’re only using Canva for amusement.

An alternative to Canva is Snappa, which does basically the same things except with an arguably more intuitive (and definitely more touchscreen-friendly) interface. Canva’s one major failing is that it’s not optimized for touchscreens, so if you’re creating on a tablet, consider giving Snappa a go.

Micro Marching League (freemium)

Micro Marching League might be in the running for most nerdy niche option on this list. It’s basically Pyware but for kids.

…If you’re thinking “there’s no way you can make Pyware kid-friendly,” you’re right.

Micro Marching League (MML) allows you to design your own marching band drill and watch it play out…more or less effectively? It’s not a tool I’d use to write drill I would actually put on the field or floor, but it’s a fun introduction to drill-writing for anyone who hasn’t tried it before.

The free version offers enough scope to get started. You can pay for options like inserting your own uniform colors or creating indoor drill, but if you’re that serious about writing drill I’d recommend just switching to Pyware.

Master MML and your learning curve for Pyware won’t be any shorter, but at least you’ll have some idea what you want certain forms to look like.

Seventh Sanctum

I’ve been messing around on Seventh Sanctum since…years? The site’s biggest draw for creatives is probably its massive collection of idea generators, from sci-fi plots to made-up 1980s cartoon heroes.

It also offers a huge list of resources for creatives, including stock photography sources, publishing outlets, online portfolio hosting sites and so on. It’s a decades-old standby of the creative community, but I’ve included it here in case you’re one of today’s lucky 10,000.

Springhole

Like Seventh Sanctum, Springhole is also (a) ancient (in Internet terms) and (b) full of creativity resources. Springhole, however, is geared almost entirely at writers.

In addition to various generators, you’ll also find a wealth of writing advice, from how to know when you should write a novel to how to determine whether your main character is actually a “Mary Sue” or just being called that by disgruntled dudebros who haven’t realized that “being female” is a default state for half the population.

I used to get lost in Springhole for hours on end. It’s still one of my favorite online rabbit holes.

Zompist

Zompist is Mark Rosenfelder’s personal website, and it’s absolutely fantastic if you’re into any kind of worldbuilding or conlanging.

Rosenfelder is the author of several books on how to construct conlangs (which I also recommend). The website both provides an introduction to those books and hosts many of the in-depth examples that didn’t fit on the physical pages.

If you’ve ever wanted to build your own fantasy world/language, or having built one you now have no idea what to do with it, there’s plenty here to keep you busy. It’s where I found the format for this Niralan culture test, for example.

Lexiconga

Lexiconga is the other Extremely Niche tool on this list. It’s a dictionary compiler, which means it’s probably most useful to folks who are already in the vocabulary-building phase of conlanging.

There’s certainly nothing wrong with using an Excel spreadsheet for vocabulary purposes. I do. But I appreciate the way Lexiconga is built to manage your word hoard. My Excel sheet for Niralanes, for instance, has nearly a thousand entries currently. That’s a lot of scrolling, and it’s scrolling Lexiconga doesn’t make me do.

One of my favorite parts of social distancing/sheltering in place so far is that I feel even freer than usual to spend hours making terrible art (like this abomination my brain woke me up at 4 am to write). I cannot encourage it enough. Making terrible art is how we make good art (eventually) – but more importantly, it’s just plain fun.

Go forth and make terrible art. ❤


Support an artist: buy me a coffee or share this post with all your online friends.


the creative process, writing

Creativity by Markov Chain, or Why Predictive Text Isn’t the Novel-Writing Shortcut You’re Looking For

Over the past year, I’ve played with Botnik’s predictive text generator to create everything from alternative histories of popular holidays to terrible Christmas carol lyrics to the median New Year’s resolutions. It’s fun, it’s silly, and it is far more labor-intensive than most people imagine computer-generated texts would be.

Most of the conversations I see around AI and text generation assume that writers are going to be put out of business shortly. They assume that AI can not only generate text but generate it well, without human intervention.

These assumptions are…a bit overdone.

Here’s why predictive-text novels won’t be the next big trend in literature.


What’s a Markov Chain?

Predictive text is typically powered by a Markov chain, an algorithm that tracks a set of defined “states” and determines the probability of jumping from the current state to each possible next state.

For instance, if you wanted to create a super-simple Markov chain model of a writer’s behavior, “writing” might be one state and “not-writing” might be another. (This list of possible states is called a “state space.”) At any given time, the writer is either “writing” or “not-writing.”

There are four possible transitions between “writing” and “not-writing”:

  1. writing to writing (“must finish paragraph!”),
  2. writing to not-writing (“what’s on Netflix?”),
  3. not-writing to writing (“once…upon…a…time”), and
  4. not-writing to not-writing (“why yes, I WILL binge all of The Witcher, thanks”).

If each transition is equally likely, the probability of moving from any state to any other state is 0.5 (here’s a visual representation). At least at the beginning.

Markov chains also have a limited ability to learn from data inputs. For instance, one could program a two-state Markov chain to predict whether you will write or not-write on any given day, based on last year’s calendar. (If you’re like me, your Markov chain will be more likely to predict that you’ll write tomorrow if you wrote today, and more likely to predict not-writing tomorrow if you didn’t write today.)
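To make this concrete, here’s a minimal Python sketch of that two-state writer model. The state names and the sample “calendar” are invented for illustration; this is a toy, not a real predictive-text system:

```python
import random

# Hypothetical two-state Markov chain: "writing" vs. "not-writing".
# Every transition starts out equally likely (0.5), as in the example above.
transitions = {
    "writing":     {"writing": 0.5, "not-writing": 0.5},
    "not-writing": {"writing": 0.5, "not-writing": 0.5},
}

def learn(history):
    """Re-estimate transition probabilities from a list of observed states
    (e.g. last year's calendar of writing/not-writing days)."""
    counts = {s: {t: 0 for t in transitions} for s in transitions}
    for today, tomorrow in zip(history, history[1:]):
        counts[today][tomorrow] += 1
    for state, nexts in counts.items():
        total = sum(nexts.values())
        if total:  # only update states we actually observed
            for nxt in nexts:
                transitions[state][nxt] = nexts[nxt] / total

def predict(current):
    """Sample tomorrow's state given today's state."""
    return "writing" if random.random() < transitions[current]["writing"] else "not-writing"

# A streaky, made-up history: writing days tend to follow writing days.
learn(["writing"] * 5 + ["not-writing"] * 3 + ["writing"] * 4)
```

After `learn()` runs on that streaky history, the chain is much more likely to predict “writing” tomorrow if you wrote today – exactly the limited learning described above.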

What Does This Have to Do With Predictive Text?

Predictive text algorithms are Markov chains. They analyze words you have input in the past (or in the case of Botnik, how often words appear in proximity to other words) in order to predict the probability of you jumping to a particular word from the state “the word you just wrote.”
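Under the hood, a word-level chain is just a table of next-word counts. Here’s a minimal sketch – the corpus is invented, and real predictive keyboards train on far larger datasets:

```python
from collections import Counter, defaultdict

def build_chain(text):
    """Count which word follows which in the source text."""
    words = text.split()
    chain = defaultdict(Counter)
    for current, following in zip(words, words[1:]):
        chain[current][following] += 1
    return chain

def most_likely_next(chain, word):
    """Return the single most probable next word -- the top suggestion
    a predictive keyboard would surface."""
    return chain[word].most_common(1)[0][0]

# Tiny invented corpus; "the" is followed by "cat" twice and "mat" once.
chain = build_chain("the cat sat on the mat and the cat")

# Greedily taking the top suggestion every time quickly circles back on itself.
word, output = "the", ["the"]
for _ in range(8):
    word = most_likely_next(chain, word)
    output.append(word)
print(" ".join(output))  # → "the cat sat on the cat sat on the"
```

Notice that the greedy output falls into a repeating cycle almost immediately – a tendency that matters a great deal for longer texts.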

Why Writing With Predictive Text is Hard

You don’t need to understand the nuances of Markov chains to grasp that a book written by one would be tough to produce – but that understanding does make it easier to explain why.

Markov Has a Short Memory

As mentioned above, Markov chains have a limited ability to adjust their predictions based on factors like how frequently a state appears or how often it appears relative to (as in, before or after) other states.

The key word in that sentence is limited.

Beyond the current state, Markov chains have no memory of the past. They can tell you which word is most likely to appear after this word, but they can’t tell you whether that prediction has already appeared 500 times or not at all.

In online predictive-text memes, this means that some results get stuck in an endless loop. For instance:

Predictive text meme Tweet that reads “Trans people are going to be a good time to get a chance to look at the time to get a chance to look at the time to get a chance to look at the time….” A response reads “Ok but did you get a chance to look at the time?”

This was a response to a predictive-text meme on Twitter that challenged people to type “Trans people are” into their phones and then hit the predictive-text suggestion to generate a result. This Twitterer’s predictive text got caught in a loop pretty quickly – it doesn’t recognize that it said “time to get a chance to look at the” already. It takes another human to save the joke here: “Ok but did you get a chance to look at the time?”

What Does This Mean for a Predictive-Text Novel?

A Markov chain’s predictive limitations pose two problems for long-form creative text generation:

  1. The Markov chain can get stuck. The more common a word is, the more likely it is to get stuck. “A,” “and,” “the,” “of,” and similar function words can easily trap the chain.
  2. Novels depend on memory. Story development requires attention to what came before. Predictive text, however, can only predict which word is most likely to come next; it can’t do that in the context of prior theme, character or plot development.

The results, therefore, are more likely to be incomprehensible than anything else – at least without careful editing. (I’ll get to that below.) For some examples of absurdist Markov chain results, see r/SubredditSimulator, which consists entirely of Reddit posts by Markov chains.

The Raw Material Blues

While generating last year’s various holiday posts on Botnik, I quickly discovered that the raw material fed to the predictive text generator makes a huge difference in the quality of the output.

If you’ve read the post series, you may have noticed a trend: In each one, I note that I fed “the first page of Google search results” or “the first twenty Google search results” to Botnik (those are the same number, by the way). Why so specific?

It appears that the minimum ratio of source text to output that Botnik requires to produce text that is funny but not incomprehensible is about 20:1. In other words, if I wanted a blog-post-sized text, I needed to put in at least 20 texts of equal or greater length.

Twenty to one might even be undershooting it. Most of my predictive-text posts are around 500 words, while the top Google results from which they were generated tended to be 1,500 to 2,000 words.

What Does This Mean for a Predictive-Text Novel?

I haven’t tested this ratio on anything longer than a blog post. I do not, however, have any reason to believe that the ratio would be smaller for a novel. In fact, I predict the ratio would be larger for a coherent novel that looked sufficiently unlike its predecessor to survive a copyright challenge.

In every holiday blog post I generated via predictive text, the generator got “stuck” in a sentence of source text at least once. In other words, the Markov chain decided that the most likely word to follow the one on screen was the next word that already existed in a sentence somewhere in my source text.

When generating text from Google’s top twenty blog posts on the history of Thanksgiving, for instance, it was pretty easy to pick up on these sticking points. I didn’t have the entire source text memorized, but I knew my Thanksgiving history well enough to recognize when Botnik was being unfunnily accurate.

For a predictive-text novel of 70,000 words, one would need:

  1. Approximately 1.4 million words of source text (minimum), or about twenty 70,000-word novels, and
  2. A sufficient knowledge of that source text to recognize when the predictive text generator had gotten stuck on a single sentence or paragraph.

Point 2 has some creative opportunities. A predictive-text novella based on Moby-Dick, for instance, might benefit from repeating a large chunk of Moby-Dick verbatim (said novella would need to stay under 10,455 words to fit within the source text limitations, if you’re wondering). But the writer would still have to know Moby-Dick well enough to recognize when predictive text was simply reciting the book versus when it wasn’t:

We, so artful and bold, hold the universe? No! When in one’s midst, that version of Narcissus who for now held somewhat aloof, looking up as pretty rainbows in which stood Moby-Dick. What name Dick? or five of Hobbes’ king? Why it is that all Merchant-seamen, and also all Pirates and Man-of-War’s men, and Slave-ship sailors, cherish such a scornful feeling towards Whale-ships; this is a question it would be hard to answer. Because, in the case of pirates, say, I should like to know whether that profession of theirs has any peculiar glory about it. Blackstone, soon to attack of Moby-Dick; for these extracts of whale answered; we compare with such. That famous old craft’s story of skrimshander storms upon this grand hooded phantom of honor!

A Future for Creative Writing?

I learned with the first predictive-text holiday post that I couldn’t accept the predictive-text generator’s first suggestion every time, nor could I click suggestions at random. I was still writing; it’s just that I was choosing the next word in each sentence from a predictive-text generator’s suggestions, not from my own much larger vocabulary.

Many conversations about predictive-text creative writing suggest or assume that predictive-text will eventually take over our own creative processes – that it will supplant writing rather than support it. Not in its current form, it won’t.

For me, some aspects of writing via predictive text are actually harder than writing on my own. The Markov chain frequently backs into function-word corners and has to be saved with the judicious application of new content words. Punctuation is typically absent. Because the algorithm has no idea what it wrote previously, it doesn’t know how to stay on topic, nor does it know how to build coherent ideas over time.

Everything it couldn’t do, I had to do – and I had to do it with my next word choice perpetually limited to one of eighteen options.

That said, I love the idea that predictive-text authoring could arise as an art form within writing itself. Predictive text generators challenge us to engage with the art and craft of writing in new ways: they impose new limitations, but they also suggest new possibilities – and often hilarious ones.

Anyway, here’s Wonderwall:

So maybe
Ya go to sadness baby
Cause when you tried
I have wasted dreams


Support the arts: leave me a tip or share this post on social media.
