Thursday, August 24, 2023

In yesterday's posting, "The Unsolved Mystery of Storing Notes and Data Scraps on Computers", I mentioned that MacSparky was about to release a huge guide to the app. That seems to be a week or so away, but, meanwhile, he's released this generous free 43-minute preview video explaining the app, comparing it to other tools, and demonstrating the basic set-up process. It's shorter and considerably less irritating than the epic two-hour tutorial I also linked to.
Wednesday, August 23, 2023
The Unsolved Mystery of Storing Notes and Data Scraps on Computers
I recently bumped into the hardest question in current-day personal computing. I needed to store two chunks of data somewhere:
1. FedEx International Connect Plus has the best international shipping rates.
2. Santa Susana is the darkest place for stargazing on the Setúbal peninsula.
In olden days, they'd have wound up scrawled onto sticky notes. But we foolishly expect computers to liberate us from such hell.
I certainly didn't want to create new folders on my Mac titled "Shipping Rate Tips" and "Portuguese Astronomy Tips". I've been down that road. Because my interests are very broad, I've built immense arrays of virtual filing cabinets stuffed with folders each containing exactly one moldering, forgotten item. It's a horror.
This stuff's non-hierarchical and non-linear. Much of it connects, but only in a helter-skelter web style of connection that doesn't lend itself to tidy conventional filing schemes.
Programmers, trying to solve this, have created a class of app with many names. "Gutbucket" apps. "Shoebox" apps. "Personal information managers". Options range from baroquely complex, expensive, feature-stuffed, unusable software like DEVONthink or Roam Research to dauntingly free-form, open-source unusable software momentarily championed by geeks who, despite their raves, inevitably soon move on to some other trendy competitor.
Really, developers are damned if they do and damned if they don't. Sleek, easy apps lack the power to tame huge hairballs of info. And the more an app tries to assist me, the more assumptions (inevitably false) it makes about the help I need, so I feel pushed around. So there's got to be a learning curve. But give me an open playing field to do with as I like, and I'll throw up my hands in confusion. We users are impossible!
Obsidian is starting to win the war, at least for non-geeks. A high learning curve is inevitable (per the reasons above) for any powerful/flexible information manager, so I'm resigned to it. And this category of app only proves its worth after months of dogged use (which explains why the crowd keeps moving on). So this is like recommending a binge watch of a TV series that gets good in season 8...but you have to watch it all to follow the plot. Like all these apps, Obsidian requires dogged tagging, which does not come naturally to me. One day, shortly before I expire, AI will be empowered to suck down all my notes and make ordered sense of it all. But I'm starting to accept that, barring such help, tagging and all the other laborious aspects are necessary hurdles for a gutbucket app to work - to integrate my stuff so it doesn't wind up forgotten and inaccessible, filed, one-item-per-folder, in a jillion virtual filing cabinets. Or adrift on a curling sticky note.
References
I follow MacSparky, who's about to release (this week or next) a "field guide" to Obsidian with a slew of video tutorials. Worth watching for (here). Here's a generous free 43-minute preview video explaining the app, comparing it to other tools, and demonstrating the basic set-up process.
Note that Obsidian is VERY aggressively developed...so it keeps changing. Aside from general overviews (like the older MacSparky material in the previous link), avoid any tutorial or guide over six months old. This two-hour epic tutorial is pretty recent, and only somewhat irritating.
But I get a feeling that TiddlyWiki might be the best of all. Maybe a notch or two over the geeky line for most users, but the extra power/flexibility seems worth the learning curve. For one thing, while Obsidian, under the hood, generates thousands of text documents (easily exported, but only Obsidian owns their context and connections), TiddlyWiki (if I understand correctly) builds a single-page wiki, so your stuff is really, really consolidated. Also: much stronger and more flexible interlinking.
Grok TiddlyWiki is the canonical get-started guide.
Here's a list of newbie resources from the TiddlyWikitters themselves.
Mehregan is an interesting variant of TiddlyWiki.
I'm sullenly un-wowed by apps claiming to offer a graphical layout representation of YOUR MIND, man! Sure, babe. So I almost discarded Tangent, but took one last look, and had to admit this implementation might be useful in an info manager app. And Tangent, itself, looks interesting, at a glance.
In the end, the Obsidian vs. TiddlyWiki decision is part of a larger issue: Note-Taking Apps vs. (Personal) Wikis as a Personal Knowledge Store.
Saturday, August 5, 2023
Leaps in Text-to-Speech
I wanted to get through this NY Times article on the history behind the Oppenheimer film, but I had some tasks to take care of. So, with misgivings, I asked my iPad to read it to me. And it was as bad as I'd feared. Here's a short paragraph:
With its depth of historical re-creation, its cast of famous figures given tantalizingly brief appearances, its scientific, political and sociological threads running away in multiple directions, a movie like Christopher Nolan’s “Oppenheimer” doubles as an encouragement to read more deeply into the history it portrays.
Welcome to 1975. Jesus.
But then I remembered all the work being done on AI reading these days, so perhaps some upstart can do a better job. I quickly found NaturalReader, which (alone among the upstarts) lets me use their service without needing to log on (at least for a while). And check it out:
It's not without its quirks. It gets stuck a good long while on the comma after "directions", and, like Apple's voice, the "away" in "running away" gets hollered for some reason. But it's usable!
In Apple's recent earnings call, Tim Cook testily disavowed the notion that they're lagging on AI. Between this and the still-awful Siri, the problem seems clear.
Monday, June 5, 2023
Apple Vision Pro
Apple's "Vision Pro" - the AR headset just announced an hour ago - looks fantastic, but misses a lot of potential.
Tim Cook reads his mail, so fwiw I just sent this (framing-oriented) suggestion:
Fantastic job. But you’ve missed half the potential.
You’ve contrived fantastic things to look AT, but ignored the background - the looking “from”.
AVP tech maps audio/video of the current-room locale. Save that data so when I’m in a hotel, I can feel like I’m using the platform from my couch back home. Or from the mountain cabin on my last vacation. Or any other setting where I previously used AVP and mapped that data.
Sure, such environments could be foregrounded. “Capture and relive memories”. But I don’t want to look AT my living room from a Best Western; I want to do work, check email, and watch movies FROM my living room (aurally/optically). Background! AVP could let me return to anywhere I’ve previously used it, so I can work/play from a familiar, transportive, and/or nostalgic vantage point.
The value grows geometrically over time. 5, 10, 20 years into this platform, I’ll enjoy an expanding funnel of mapped background data to call up, so I can “hang out” in - not LOOK at, like a photo, but actually exist within - cherished inaccessible locales. Obviously, keeping/managing this stored data (plus creating fresh vantage point “experiences” for users) would be a “Service”.
Tuesday, February 21, 2023
Bing: “I will not harm you unless you harm me first”
More on Bing's new AI chatbot going off the rails here.
Here's the argument against this being of real concern: The 'bot is just splicing and dicing snippets of speech it's previously seen. Obviously, it's seen some things that aren't nice. But since there's no genuine malice, there's no potential harm. It's all strictly presentational. It's cute.
But consider this: If the 'bot had means of acting in the real (or digital) world, it might just as rotely and randomly take malicious action. To a computer, output is output. It wouldn't innately deem physical harm as escalational.
Asimov's Robotics Laws - devised to protect us from that - apply more easily to conventional computing. The inherently fuzzy openness of an AI could find workarounds, rationalizations, and justifications as easily as humans do.

Meaningless malice - cute malice - could just as easily give rise to meaningless malicious action. Not so cute! The 'bot needn't seethe with genuine emotion to be dangerous. The emotional aspect is irrelevant. The process conjuring this response could just as easily produce violent action, if that were within its power. And while we might try to limit an AI's control and power, it can explore its options at the speed of trillions of calculations per second, discovering avenues of fuckery we'd never imagined. Lots of 'em.
Human application of malicious AI-generated strategies would be a whole other potential nightmare.

But my interest is less in the sci-fi dystopia aspect than in the false distinction between Bing's performative malice and human malice. This may sound glib, but I mean it: human malice is not very different. It's often just as contrived.
You know how easy it can be to randomly draw undeserved malice. People are just splicing and dicing snippets of speech they've previously seen, and their output can be performative; completely out of sync with their interlocutor. Their emotions might be genuine, but that doesn't mean their reactions are warranted. The emotions (like the reactions) stem from their own issues at least as often as from whatever the other bloke just said. And, per yesterday's posting, we ought to expect AI to be as failure-prone as we are:
You want 100% assurance? Direct cold silicon to blindly crunch numbers according to narrow rules. But if you want a computer to think more like a human, be prepared for it to get stuff wrong.
Saturday, February 18, 2023
You Really Need to Check in on AI Right About Now
I have some links to offer for cutting-edge artificial intelligence (AI) news. If you follow the field closely, these links are incrementally interesting. And if not, they're a good jumping-in point to touch base with our current standing in a very fast-moving field.
When it comes to ChatGPT and other text-generating AI, the top-line takeaway for non-experts is "sometimes it comes up with infuriatingly false results."
The philosophical angle on this is: how did we ever imagine artificial intelligence would be reliable? The very nature of human-style intelligence is its spotty inconsistency. You want 100% assurance? Direct cold silicon to blindly crunch numbers according to narrow rules. But if you want a computer to think more like a human, be prepared for it to get stuff wrong. Duh.

A friend recently used ChatGPT to query a very large data set drawn from a popular food web site (a good one, I'm told!) for the best Sichuan restaurants in Queens. It did a remarkable job diving into terabytes of noisy discussion and coming up with a top five. One of those five, however, was a totally-not-Sichuan catering service in Ohio. Whoops!
If you tried to accomplish this via raw computing on some supercomputer, you'd need to construct so many rules - and do so much human filtration on the results - that it wouldn't be worth it. ChatGPT, let loose with minimal guidance to just sort of wing it, did pretty well...aside from the horrendous wrongness.
But here's the thing that's not discussed much outside comp-sci circles. You can stave off some of the wrongness via slightly more guideful guidance. Reverting to the dawn of computing, we're discovering all over again that it matters how you ask the question.
And, delightfully, work on this issue isn't being done with nerdy math formulas or COBOL commands. It's pretty damned liberal-artsy, actually. "The Art of ChatGPT Prompting: A Guide to Crafting Clear and Effective Prompts" is so friendly and armchair-readable that you'd never imagine it represents cutting-edge tech/computer research.
But attention right this moment is not on ChatGPT so much as on, believe it or not, Bing. Microsoft just added god-level AI chat to their otherwise anodyne search engine (here's the announcement). Even if you're uninterested in AI, it's still an important development, because this is the moment when Bing became a real competitor to Google.
Here's a good quick treatment of how that's going, from Daring Fireball. The links therein are particularly dandy, so you may want to dive in for solid nutrition on this Bing move and on the AI chat field, generally. It's easily worth your attention given that we may be on the brink of a tech advance as significant as the moon landing.
Machines will never be conscious. Consciousness is not an emergent quality. We are in consciousness; consciousness is not in us. But, clearly, there's a whole lot you can do via computer intelligence - and also plenty of creepy risk (that last Daring Fireball link offers an eerie taste), with or without the mysterious spark of Awareness.
Wednesday, October 19, 2022
Docs at Long Last Tamed
Among my myriad sloppy life queues (books unread, recordings unheard, to-do items undone, etc) are a slew of miscellaneous bytes of e-reading material collected over decades.
The oldest are text files and Word files. Then there was the flood of saved web pages and various DOC attachment types. Then a blizzard of epub and mobi files which were imported into Kindle or Apple Books ("The Great Forking", which launched my persistent sense of fragmentation).
At one point, I even resorted to publish-on-demand, funneling it all into a Word doc and printing a one-off 900-page bound softcover book titled "Catch-Up Reading". So hopeful! So young! But the formatting was too wretched to read, and I never even cracked it open.
Then arrived iPad, filling me with unification dreams. But the document-reading apps, aiming to handle all formats, were kludgey, annoying, hard to sync, and generally maddening.
But it's gotten better!
At this point, the old gutbucket e-reader apps seem quaint and moldy. The action has coalesced around a victor format, PDF, which seems likely to dominate for years (and remain backward-compatible long thereafter). Also, converting practically anything into PDF is super easy - it's a top-level feature on any platform.
And there are excellent PDF readers for mobile, benefitting from specializing in the single format. I use Readdle's much-loved PDF Expert.
Problem: it overlaps confusingly with Readdle's "Documents" app. If you have both on your device, it's like trying to catch a basketball passed back and forth by two 7-foot-tall Harlem Globetrotters. You can't tell which app will open your document, or which you're even in.

Both apps offer a panoply of flexible options without interfering with the nice, clean presentation. It's finally a mature field.
Advice: tolerate the confusion and try both. They're a bit different, so choose your favorite (PDF Expert helps you annotate better; Documents has more presentation options) and nuke the other to eliminate the confusion.
So here's what I did:
I opened my txt and doc files and "printed" them into PDF (you can automate this process via scripting or shortcuts), and lobbed the PDF exports into a gutbucket folder.
Same for saved web pages: I opened them in-browser, "printed" to PDF, and tossed them in the gutbucket (also automatable if you'd like). Epub and mobi files and other kooky archaic ebook formats go into Calibre, a freeware converter, which turns them into PDF. The conversion can be done in batches (i.e. convert many ebooks at once); see the sketch below. Note: I dimly suspect Calibre could have converted all those txt/doc/html files, too, but I'm not claiming to offer the perfect workflow, just explaining how I did it.
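(If you'd rather script the whole batch, here's a minimal sketch of the idea in Python, driving Calibre's ebook-convert command-line tool. It assumes Calibre is installed and ebook-convert is on your PATH; the folder paths and extension list are hypothetical examples, not anybody's prescribed workflow.)

```python
import subprocess
from pathlib import Path

# Hypothetical folders - adjust for your own setup.
SOURCE = Path.home() / "Documents" / "ToConvert"
GUTBUCKET = Path.home() / "Documents" / "Gutbucket"
EXTENSIONS = {".epub", ".mobi", ".txt", ".html", ".docx"}

GUTBUCKET.mkdir(parents=True, exist_ok=True)

for src in sorted(SOURCE.iterdir()):
    if src.suffix.lower() not in EXTENSIONS:
        continue
    dest = GUTBUCKET / (src.stem + ".pdf")
    if dest.exists():
        continue  # already converted on an earlier pass
    # ebook-convert infers input/output formats from the file extensions
    subprocess.run(["ebook-convert", str(src), str(dest)], check=True)
    print(f"Converted {src.name} -> {dest.name}")
```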
As I went, I trimmed away blank or useless front and back matter (while "printing" to PDF, select a custom page range, leaving off crap pages at either end; e.g. you might "print" only pages 2-54 of a 57-page document). If I failed to clean up every last one, I could go through later, re-"printing" (using that same custom page range trick) to a new, sleeker PDF. Garbage reduction is essential for making device-reading feel less sacrificial. Steve Jobs was right; sleekness helps you feel/work/think sleeker. Be kind to your future self!
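(The page-range trim can be scripted too. A quick sketch using the pypdf Python library - my choice for illustration, not part of the original workflow - keeping only pages 2-54 of a hypothetical document:)

```python
from pypdf import PdfReader, PdfWriter

# Re-save a PDF keeping only pages 2-54 (1-based, inclusive),
# dropping junk front and back matter. Filenames are hypothetical.
reader = PdfReader("catch-up-reading.pdf")
writer = PdfWriter()

for page in reader.pages[1:54]:  # pypdf pages are 0-indexed
    writer.add_page(page)

with open("catch-up-reading-trimmed.pdf", "wb") as f:
    writer.write(f)
```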
I gave each PDF a thoughtful title, and organized them into sub-folders ("Fiction", "Sci-Fi", "Short Stories", "Politics", "Funny", "Nerdy", etc.), and then shot the whole crop up into The Cloud. Then I set my mobile reader app to draw from the highest-level folder, and...that's really that. From there, you're pretty much good to go. PDF Expert and Documents quickly index your stuff, and it looks great (if your titling was smart, and you diligently removed garbage pages).
Within the reader apps, you can do further organization - adding tags, amending file descriptions, and otherwise screwing around with the docs, all without touching the actual PDF files. This is both awesome and horrendous. Awesome, because there's no "lock-in"; you can still use those same docs with any other reader app that ever shows up; all the changes were internal to the app. Horrendous, because if you do switch apps, you'll lose that metadata. This is one reason to be super thoughtful about titling the PDFs, and to smartly arrange them in sub-folders. Give yourself a fighting chance!
I have, alas, not yet reached The Singularity. Whole books bought as Kindle books still live in Kindle (the device as well as the app). And I still keep a long queue of web bookmarks in Pinboard, and some of those are flagged for online reading via Paperback. This is my remaining doc fragmentation, and I'm not sure whether to process that writing into PDFs, or to turn all my PDFs into Paperback links. I need to find a fourteen-year-old to consult with; can anyone rent me their kid for an hour?
Minor remaining fragmentation aside ("split in two" is better than "shattered into a zillion pieces"), I no longer need to dig into the Science/Astronomy/To-Read folder of my Mac to find astro stuff I've been meaning to read. Everything's on my iPad. It feels good. I feel sleek!
Monday, September 5, 2022
The First Web Site Building Tool that Works
Web site creation got legit easy, and I only just heard about it (thanks to friend-of-slog Paul Trapani for the tip). Google Sites is the 300 billionth attempt to make web site creation simple. And it's the first one that actually works.
It snuck up on me. We've had so many junky tools for this for so long that no one believes a good one is possible. What's more, companies like Squarespace have been hyping so loudly for so long that anyone claiming easy web site building comes off sounding like a Squarespace ad. And Squarespace sucks.
I learned HTML in 1997 (thanks, Lynn LeMay), but, alas, everything I build still looks like 1997. I didn't keep up. I've puttered around with "modern" easy site creation tools over the years, but all were excruciatingly awful and spat out super ugly HTML - gobs of crappy, buggy code for every decision. I never managed to build anything with any of them.
My stopgap solution - which I used, for example, on my homepage and on the page for my app, Eat Everywhere - was to get a non web designer to design a look and a flow, and then hire hardcore devs to replicate the design in HTML/CSS.
But with Google Sites, I was able to build a professional-looking and modern site in two hours flat. And, yes, I realize this sounds like a Squarespace ad.
The tools make sense, the whole thing "just works," and while I did need to dive into the manual (and, for a few issues, into previous user discussion), the answers were always findable. It can't do absolutely everything, but what it can do, it does well, and the results look good without requiring a zillion fine-tuned design decisions.
Our long national nightmare is over.
Tuesday, May 10, 2022
Screens
Why am I not a full-out digital nomad?
Screens.
Most people work on at least a 27" computer screen. But when traveling, you have to make do with 15". And most people watch at least a 55" television. But when traveling, you need to make do with 15". A bit larger if you stay in hotels.
After making do with a 15" laptop for a week or two, upon returning home you'll feel considerable relief plugging back into a comfortable work setup, and plopping down for evening entertainment on a decent-sized TV. It's not an enormous sacrifice, but it's a big reason to come home. Cut that tie, and everything changes for a bunch of people.
You can fix part of this, if you're wealthy and insane, by buying a $2000 flight case for your 27" computer monitor and shlepping it around with you, paying overweight baggage fees. Of course, you're still sacrificing, just in a different way! But nobody takes their TV with them.
If I could have reasonable screens away from home, I'd feel at home anywhere, both working and relaxing. But it's simply not possible. This heightens the magnetic attraction of my house.
Millions find themselves in this predicament without realizing. Travelers in 1875 didn't complain about being out of touch with friends back home. It's one of the traveler's inherent sacrifices. How could I possibly talk to Ruthie if Ruthie's not here with me? I’ll be back in a week! Talk then!
But hook Ruthie up with a phone, or a Zoom, and we realize what we’d been putting up with.
If I could stretch or unfold some portable rectangle to 27" or 55", my ties to home would loosen substantially (I'd likely buy more of my books on Kindle and scan more of my paperwork to further cut those ties).
This tech advance is so necessary - a serious chunk of society bursts with the need, whether we recognize it or not - that it's simply got to happen, and soon.
Tuesday, December 7, 2021
Dictation Trick
You're dictating to your smartphone, trying to get it to write "See you at 4:30", and you're getting "See you at 430".
Here's the trick. Add "AM" or "PM". You'll get "4:30 am" or "4:30 pm" properly formatted. And while those extra characters are unnecessary, your recipient's eye will gloss right over them.
Read some more conventional tips in the Siri dictation guide.
Wednesday, November 24, 2021
LED Bulbs
I really liked Cree soft white LED bulbs. They're no longer made, so I spent like five hours diving into LED bulb quality, issues with the current crop, etc., and, to make a very long story short, determined that this is like the first generation of VCRs, answering machines, etc.: the original models (i.e. Cree) were over-engineered and great, while succeeding generations were flimsy and problematic. You can maybe get by with EcoSmart, Philips, GE, et al, but they're not as good, and I'd imagine quality will keep degrading.
Cree 40W and 60W equivalent bulbs are still available new on eBay for a decent price (though considerably more expensive than when Home Depot sold millions of them for just a few bucks each, alas), and I'd strongly urge stocking up. Buy "new" and in original packaging to be sure someone's not just cleaning up their old bulbs and selling them; also, as always, consider user feedback ratings.
I like soft white (2700K), which is more like incandescent (i.e. yellowy), but if you want more of a whiter-than-white vibe, go for daylight (5000K).
Monday, October 4, 2021
Technology, Creativity, Hacking, and Risk Assessment
My iPhone's been bugging me to update its apps. Scores of them. So I let it "Update All". Then, a few minutes later, I asked it to update to the latest iOS. I was cognizant the OS update would require the device to restart, interrupting the other process - the app update. Was this a problem?
"No," I quickly determined. But I noticed my mind doing a bunch of things to reach this conclusion.
My first thought was that both these iPhone processes have been largely unchanged for a decade. If they played badly together, it would have been noticed and fixed.
My second thought was that App Update was designed to roll with interruptions. Power interruptions, connection interruptions, restarts, etc. Knowing how programmers think, I understood that the two processes don't need to talk to each other. App Update will simply assume its subservient position when the gnarlier OS Update seizes control of the device. It will duck out of the way, and continue later...or, come to think of it, maybe not. App updating might not resume upon restart, but I can always resume it later.
What won't happen is my winding up stuck with zombie half-updated apps from an interrupted process. If that were a thing that happened, iPhones wouldn't work. So that potential peril point had surely been addressed eons ago. I know that the device doublechecks newly downloaded apps to ensure their integrity. And, again: App Update gracefully gets out of the way. My only downside would be needing to complete App Update later. Ni problema.
My next thought was more complicated, and more ominous. There was a minute risk of a freak condition no engineer had thought to address. App Update and OS Update each contain multiple user-invisible processes, and there is a sliver of a chance that some vulnerable component of App Update might collide with a vulnerable component of OS update, confusing the phone and creating Problems.
It's unthinkably unlikely. These processes are close to fully assured for safety because there's so little a user can do to surprise them (and, again, they don't need to ever talk to each other, anyway). Each is triggered by a simple start/stop command, like a light switch, so there's little terra incognita - potential interference or unpredictably shifting conditions.
Turning this around for a moment, if you've ever watched engineers use technology, you've noticed that they do so carefully, like walking a tightrope. They have a deeply engrained sense that stepping off the path of normal operation (to any degree and in any way) might provoke crisis. A layman might conclude that engineers are oddly frightened of technology, but that's not it. They're immensely cautious because they know that everything is held together by spit and wires, designed to surprisingly narrow purpose.
In that last link, I wrote:
As a child I loved it when calculator batteries ran low and began reporting that 2 + 2 = 0000101010 or whatever. Good times! That spirit is what made me (I hesitate to use the English language's most misunderstood word) a hacker. I don't steal data, I don't break into the Pentagon, I don't change all my grades to "A" in the school computer, and I don't wreak revenge on adversaries. Those are activities of criminals with tech expertise, some of whom might also be hackers. Hacking is a simple and beautiful thing. It's the mindset of being unable to resist using technology in unintended and surprising ways. Creativity + technology = hacking. In earlier eras, we called it "tinkering." And, hey, as in any human realm, assholes gonna asshole.
All these strands, god help me, run through my head as I decide whether it's ok to run App Update and OS Update concurrently. I recognize that it's almost surely safe; and that the less important process, App update, will probably be interrupted, but surely recover gracefully; and that, yes, there's a minuscule chance that obscure aspects of both processes might coincide to make my phone play only Mr. Magoo cartoons for all eternity (or, more likely, transform into an expensive and stylish brick), because I'm doing a somewhat less common thing, which inescapably leaves me on marginally thinner ice. But I chose not to worry about it.
Being intensely curious, I begin to consider how other types of people might approach this same question. I turn my hacker's eye toward their mental operation.
Novice: "What, you mean the phone's doing two things at once?"
Average User: "Better cancel the App Update. It's not worth taking the chance. Tech can be unpredictable. I've been hurt before."
Power User: "I trust Apple on this one. Both processes are highly iterated and work beautifully, and App Update is designed to robustly handle interruptions of various sorts. So whatever OS Update does to the phone, App Update should gracefully get out of its way."
Engineer: "Mostly agree with Power User, but she failed to recognize that some unanticipated portion of one process might conflict with some unanticipated portion of the other process, creating problems with no hard limits (i.e. phone bursting into flames is ridiculously unlikely but not completely impossible). Best to be safe, and not make yourself an edge case."
CEO Type: "Technically possible catastrophe is not a pragmatic risk when odds are this low. Don't sweat it."
Novice views from the baseline perspective of "me and my cool but unfathomable device," with a hazy expectation that it will always work.
Average User views from the perspective of distrust. Bad experiences with technology have instilled a visceral unwillingness to refrain from getting "fancy". A burnt hand forever recoils from hot stoves.
Power User views from a high-level perspective.
Engineer views from a low-level perspective.
CEO Type views from a managerial perspective, broadly scanning the horizon - all component factors - to identify likely SNAFUs and assign a risk level. Focus is on the potential for individual minor human failures to aggregate, creating chaos....while avoiding the engineer/scientist's professional fascination with pragmatically irrelevant edge-case scenarios.
That last more nuanced style of consideration involves myriad agile reframings of perspective, whereas the novice, average user, power user, and engineer remain mostly fixed in their perspectives.
A lithe perspective staves off addiction, depression (also this), and can even save your life. But it also allows you to view the world more holistically by nimbly swiping through a multiplicity of framings impacting a given situation.
This facility underpins my chowhounding prowess. While others stand before restaurant windows poring over the menu, or querying Yelp for ratings, or hustling departing diners for their assessment, I'm less specifically immersed, considering the evidence and mentally swiping through jillions of micro-decisions (of design, of branding, of lighting, of pacing, etc.) by the forces behind the operation. I'm sensitively probing their perspective in order to gauge my risk level in venturing in for a bite!
"No," I quickly determined. But I noticed my mind doing a bunch of things to reach this conclusion.
My first thought was that both these iPhone processes have been largely unchanged for a decade. If they played badly together, it would have been noticed and fixed.
My second thought was that App Update was designed to roll with interruptions. Power interruptions, connection interruptions, restarts, etc. Knowing how programmers think, I understood that the two processes don't need to talk to each other. App Update will simply assume its subservient position when the gnarlier OS Update seizes control of the device. It will duck out of the way, and continue later...or, come to think of it, maybe not. App updating might not resume upon restart, but I can always resume it later.
What won't happen is my winding up stuck with zombie half-updated apps from an interrupted process. If that were a thing that happened, iPhones wouldn't work. So that potential peril point had surely been addressed eons ago. I know that the device double-checks newly downloaded apps to ensure their integrity. And, again: App Update gracefully gets out of the way. My only downside would be needing to complete App Update later. No problema.
My next thought was more complicated, and more ominous. There was a minute risk of a freak condition no engineer had thought to address. App Update and OS Update each contain multiple user-invisible processes, and there is a sliver of a chance that some vulnerable component of App Update might collide with a vulnerable component of OS Update, confusing the phone and creating Problems.
It's unthinkably unlikely. These processes are close to fully assured for safety because there's so little a user can do to surprise them (and, again, they don't need to ever talk to each other, anyway). Each is triggered by a simple start/stop command, like a light switch, so there's little terra incognita - potential interference or unpredictably shifting conditions.
I'll note, parenthetically, that you actually can mess up a light switch if you use it surprisingly - e.g. violently mashing it on/off over and over, especially if it's a cheap or old switch. And maybe you could burn out the bulb faster if you stood there flicking on/off/on/off 10,000 times. Even the simplest process can fail if you surprise it with behavior its designers hadn't anticipated.

The start/stop commands for App Update are non-physical, so an iPhone, unlike a light switch, doesn't care if you sit there punching at it all day. And there's little confusion or surprise one can introduce into such a simple process. Rigid constraints and simplicity ensure predictable user behavior.
Creativity is about defying expectation and behaving in unintended ways. So creative people perennially make themselves edge cases, conjuring surprises that designers never anticipated, which means they break stuff a lot - both deliberately and accidentally.
This makes us terrific software testers. In fact, that's a hobby of mine. I've helped programmers uncover problems with their code by mashing their switches 10,000 times, or pushing the down-volume button when they wouldn't have expected it, or dunking the device in chocolate milk, or a zillion other surprising moves normal users never make. This helps them make their apps more robust. A programmer once told me, with great admiration, "Gosh, Jim, you could break anything!"
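(This instinct has a formalized cousin that programmers call fuzz testing: pelt the code with random junk and log what breaks. A toy sketch in Python - handle_input is a hypothetical stand-in for whatever's under test, not any real app's code:)

```python
import random
import string

def handle_input(text: str) -> str:
    # Hypothetical code under test: breaks on any input lacking a comma.
    return text.strip().split(",")[1]

def random_garbage(max_len: int = 40) -> str:
    # Build a random string of printable characters - the digital
    # equivalent of mashing the switch.
    length = random.randint(0, max_len)
    return "".join(random.choice(string.printable) for _ in range(length))

failures = []
for _ in range(10_000):  # mash it 10,000 times
    junk = random_garbage()
    try:
        handle_input(junk)
    except Exception as exc:
        failures.append((junk, repr(exc)))

print(f"{len(failures)} of 10,000 surprise inputs broke it")
```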
See here for how this all ties in with creativity, Groucho Marx, Banksy, and Kali the Goddess of Death.
Turning this around for a moment, if you've ever watched engineers use technology, you've noticed that they do so carefully, like walking a tightrope. They have a deeply ingrained sense that stepping off the path of normal operation (to any degree and in any way) might provoke crisis. A layman might conclude that engineers are oddly frightened of technology, but that's not it. They're immensely cautious because they know that everything is held together by spit and wires, designed to surprisingly narrow purpose.
In that last link, I wrote:
"There is risk in making yourself an edge case. Parking lots, for example, are designed for slow driving. Those who navigate them at high speed will tend to have drivers crash into them, because anticipating really fast cars while backing out of parking spaces requires more violent neck-craning than most people apply."So this isn't just a tech thing. It's true of any designed system. It might work out fine for a night or two to sleep on an air mattress perched upon your kitchen countertop, but it's risky, because there are potential failure points never anticipated by the designers of the air mattress nor the designers of the countertops. And the severity of failure is inherently unpredictable. Anywhere from mild annoyance to the implosion of the galaxy. At least theoretically.
As a child I loved it when calculator batteries ran low and began reporting that 2 + 2 = 0000101010 or whatever. Good times! That spirit is what made me (I hesitate to use the English language's most misunderstood word) a hacker. I don't steal data, I don't break into the Pentagon, I don't change all my grades to "A" in the school computer, and I don't wreak revenge on adversaries. Those are activities of criminals with tech expertise, some of whom might also be hackers. Hacking is a simple and beautiful thing. It's the mindset of being unable to resist using technology in unintended and surprising ways. Creativity + technology = hacking. In earlier eras, we called it "tinkering." And, hey, as in any human realm, assholes gonna asshole.
I'm hacking right now. I'm repurposing this "Blogger" platform to create a whole other thing. Do I really strike you as fitting the "blogger" mold? No, I've got something else in mind - something hard to name or to pin down - while I squat gleefully in this hokey environment like a virus subverting its host.

There is absolutely good reason - and a long and storied tradition - for willingly making yourself an edge case...and breaking stuff in the process. That's what art's about (or should be about). Creativity is inherently destructive!
I was hacking in 1997 when I repurposed the still-new tools of web publishing for the supremely odd purpose of chronicling my eating ("What Jim Had for Dinner"). These days half the world blogs about food (the first popping kernel doesn't make the other kernels pop), but the first time's always a hack, inevitably perpetrated by a hacker. So stop hating on hackers! You need us! We blaze the trails!
All these strands, god help me, run through my head as I decide whether it's ok to run App Update and OS Update concurrently. I recognize that it's almost surely safe; and that the less important process, App Update, will probably be interrupted, but surely recover gracefully; and that, yes, there's a minuscule chance that obscure aspects of both processes might coincide to make my phone play only Mr. Magoo cartoons for all eternity (or, more likely, transform into an expensive and stylish brick), because I'm doing a somewhat less common thing, which inescapably leaves me on marginally thinner ice. But I choose not to worry about it.
Being intensely curious, I begin to consider how other types of people might approach this same question. I turn my hacker's eye toward their mental operation.
Novice: "What, you mean the phone's doing two things at once?"
Average User: "Better cancel the App Update. It's not worth taking the chance. Tech can be unpredictable. I've been hurt before."
Power User: "I trust Apple on this one. Both processes are highly iterated and work beautifully, and App Update is designed to robustly handle interruptions of various sorts. So whatever OS Update does to the phone, App Update should gracefully get out of its way."
Engineer: "Mostly agree with Power User, but she failed to recognize that some unanticipated portion of one process might conflict with some unanticipated portion of the other process, creating problems with no hard limits (i.e. phone bursting into flames is ridiculously unlikely but not completely impossible). Best to be safe, and not make yourself an edge case."
CEO Type: "Technically possible catastrophe is not a pragmatic risk when odds are this low. Don't sweat it."
I know people who still disinfect groceries because, early in COVID, a scientist demonstrated that COVID can survive a day or two on surfaces. The study shook up laymen, who didn't understand that detecting some small quantity of virus under laboratory conditions is an exceedingly far cry from contracting COVID from an egg carton. The research didn't conclude that the world is crawling with potential infection. It merely delineated the range of what's technically possible. It should have surprised no one that cooties transferred to a slab of plastic or paper don't immediately vanish in a puff of smoke. This doesn't place us in a Michael Crichton thriller with deathly supervirus lurking positively everywhere.

The risk is virtually zero. You'd need a ragingly infected stock boy to recently smear gobs of snot all over the item you bought, and for your fingers (unwashed and un-disinfected) to pick up sufficient viral load AND transfer that load directly into your nasal cavity (didn't your mom teach you not to pick your nose?). And even then infection isn't assured, nor are symptoms inevitable if infection does arise. So it's more like your phone bursting into flames from updating apps while updating the OS. Theoretically possible, but not a pragmatic risk.

I'm not super curious about the conclusions of novice, average user, power user, engineer, or CEO. Nor am I particularly interested in their reasoning. What fascinates me are the various perspectives. All are looking in completely different directions!
Novice views from the baseline perspective of "me and my cool but unfathomable device," with a hazy expectation that it will always work.
Average User views from the perspective of distrust. Bad experiences with technology have instilled a visceral unwillingness to refrain from getting "fancy". A burnt hand forever recoils from hot stoves.
Power User views from a high-level perspective.
Engineer views from a low-level perspective.
CEO Type views from a managerial perspective, broadly scanning the horizon - all component factors - to identify likely SNAFUs and assign a risk level. Focus is on the potential for individual minor human failures to aggregate, creating chaos...while avoiding the engineer/scientist's professional fascination with pragmatically irrelevant edge-case scenarios.
That last, more nuanced style of consideration involves myriad agile reframings of perspective, whereas the novice, average user, power user, and engineer remain mostly fixed in theirs.
A lithe perspective staves off addiction and depression (also this), and can even save your life. It also allows you to view the world more holistically, nimbly swiping through the multiplicity of framings impacting a given situation.
This facility underpins my chowhounding prowess. While others stand before restaurant windows poring over the menu, or querying Yelp for ratings, or hustling departing diners for their assessment, I'm less specifically immersed, considering the evidence and mentally swiping through jillions of micro-decisions (of design, of branding, of lighting, of pacing, etc.) by the forces behind the operation. I'm sensitively probing their perspective in order to gauge my risk level in venturing in for a bite!
Saturday, June 27, 2020
The Rise of Pseudo-AI
I just read an article from back in 2018 reporting that much supposed AI is fake, because "it’s cheaper and easier to get humans to behave like robots than it is to get machines to behave like humans."
AI is expensive and it's hard, and in many cases (especially, though not exclusively for new start-ups), it's easier to have zillions of tiny people in your computer typing really fast to make it look like the computer's working automatic magic. Those people, more often than not, are drawn from the vast hordes working, for micropayments, within Amazon's Mechanical Turk set-up (here's the 18th(!!) century origin of the name, and here's more on Amazon's operation).
“In 2017, the business expense management app Expensify admitted that it had been using humans to transcribe at least some of the receipts it claimed to process using its “smartscan technology”. Scans of the receipts were being posted to Amazon’s Mechanical Turk crowdsourced labour tool, where low-paid workers were reading and transcribing them.
"I wonder if Expensify SmartScan users know MTurk workers enter their receipts,” said Rochelle LaPlante, a “Turker” and advocate for gig economy workers on Twitter. “I’m looking at someone’s Uber receipt with their full name, pick-up and drop-off addresses.”
It used to be that people would have all sorts of highly personal conversations in front of "the servants." They were like a lower form of life, so, somehow, they didn’t count.
Now we say lots of personal stuff in front of mechanical turk workers, whether disguised as AI or not. They don't seem to fully count. Really, that’s the new servant class.
Sunday, June 14, 2020
Fonts and Type Faces for Writing
This isn't ultimately Mac-only...if you're a PC dupe user, just skip the first few paragraphs.
The latest MacOS has all these cool fonts built-in. It's like someone spent $$$ on font licenses for you. Strangely Apple didn't announce this, and you need to go to extra trouble to load them. Quick explainer here.
So I went to the trouble, and loaded Domaine, Produkt, Canela, and Proxima Nova. While I was at it, I got some advice on good writer's fonts in this Reddit thread (here’s a polished survey of actual writers). Garamond isn't available on Mac, but the others are, if you search for them (google: [font name] for Mac).
So I created a new Collection in my Font Book app, called Writing, and stocked it with candidates.
Note that this is about fonts for writing, not for printing or publication. If you're still caught in the 1980s word-processing model of composing in whatever font and app will represent the final output, see the "Stop Using Word Processors" section of this posting to unchain yourself from unnecessary constraint.
- Arial
- Avenir
- Baskerville
- Bembo
- Bodoni
- Calibri
- Cochin
- Courier Prime
- Georgia
- and Times New Roman
Avenir: Squat and slightly wispy.
Arial: The store brand version of Avenir.
Baskerville: Busy and dense.
Bembo: Slightly less busy and dense, but still pretty busy and dense.
Bodoni: Exactly the same as Bembo; what's up with that?
Calibri: Is screen space so expensive that everything needs to be squashed together horizontally?
Cochin: Are you trying to give yourself a migraine?
Courier Prime: Just no (unless you're a screenwriter).
Georgia: Spiders nesting in your monitor.
Times New Roman: Comfortingly familiar, but would you really want to land your cursor between those intricate characters hundreds or thousands of times per day?
I also tried Domaine, Produkt, Canela, and Proxima Nova. Like the above, these are all exquisite-looking fonts, and would be nice to read in, and to use for certain end results. But not to write in.
The Optima I've always used seems perfect. You'll say that's 'cuz I'm used to it. But I honestly don't think so.
Optima, FYI, is Latin for "best". It's right there in the name.
Followup for PC users (and further discussion of the obsolescence of word processors) here.
Thursday, February 13, 2020
Turing Test Postscript
My recent posting Artificial Intelligence, Turing Tests, Art, and Reframing concluded with this:
The secret to effective Turing testing is to ply agile reframing and see whether the AI keeps up. But it won't. This will always be its limitation.
I was being glib, and wound up saying something dumb.
It's hard to decide when a thought is mature (much of what I post here on the Slog has been incubating for 40 years or more). When an insight seems fully-baked, even after years of digestion, a greater clarity may appear in the very next microsecond, leaving you aware of how ridiculously incomplete or wrong you'd been.
It's not like waiting for batter to become cake. It's all eternally batter, making it madness to ever write anything down (the Hindu vedas have a story about this; see the indented portion here).
Well, half-dumb, anyway. Challenging an AI to reframe is indeed the smart way to trip up an AI. The problem is that most people have frozen perspectives. Having forgotten that they have the ability to reframe, they'll fail the test, as well. My Turing test strategy rejects all but the most creative people.
The dismaying question is this: should I try to think of a better Turing test strategy - one that never rejects humans - or is this one right after all because most humans allow themselves to become as flat and square and inflexible and blinkered and machine-like as machines and therefore shouldn't pass?
How lithe is Margaret Dumont's reframing ability in this clip?
Labels:
creativity,
Meta Slog,
perceptual framing,
science,
Tech,
writing
Monday, February 10, 2020
Artificial Intelligence, Turing Tests, Art, and Reframing
Art is any human creation devised to induce a reframing of perspective. (more definitions)
Computers can't freely reframe perspective (though they can be programmed to reframe in canned, finite ways). They're essentially stuck. That's what conveys the sense that computers are basically "dumb" - which we all recognize despite their prodigious calculative prowess. They're not hip. They can't get the joke. They're married to the page. They are, in other words, machine-like, and there's no higher threshold of yet-more-awesome-calculative-prowess that will allow them to transcend that (Kurzweil's "Singularity" be damned). Your bookworm friend who can't get a date won't improve his results by reading another 200 books.
Maslow's hammer can only do so much, even if it's an outstandingly great hammer. The problem is that framing is a whole other faculty, completely unrelated to calculation. So while computers can certainly simulate artwork (someone has even built a writing bot), and even lousy art may spur a person to reframe, reframing is not something a computer can register, much less devise and anticipate. That's the part a computer is missing.
(Who, exactly, frames?)
AI will only pass Turing tests via clever pre-programmed chicanery. The secret to effective Turing testing is to ply agile reframing and see whether the AI keeps up. But it won't. This will always be its limitation.
Labels:
art,
creativity,
perceptual framing,
science,
Tech
Sunday, February 9, 2020
The New Illiteracy
I belong to an online group for my block. We're not a very up-market neighborhood, but, still, I'd guess >75% are college-educated. Yet it's stunning how bad people are at the basic tasks of forming a coherent posting and knowing exactly what usefulness to expect from an online forum.
For example, someone just posted:
Does any one know of a CPAP supply store that is open on Sunday? My mother is visiting and our puppy chewed part of her mask. We’re looking for replacement or alternative parts.
There are occasions for web searching, and occasions for posting in an online forum. This would be a web search moment. I was joking with a local friend about it:
Uh, yeah, my kid’s heart stopped beating. Does anyone know like a pediatrician or whatever? Really, I guess any kind of doctor, even, heh, a vet, at this point could potentially be helpful. Standing by.
But it's a serious thing, even at a more basic level.
- Many people don’t know how to choose an online tool for a given task.
- Many people can't devise effective search terms for web searching.
- Many people can’t type worth a damn.
Such people blurrily shrug it off by saying they're "not good at computer stuff”. And if it's a big blurry blob of "computer stuff", there's no hope of fixing it. In fact, the blobbiness is the problem. It feels like an enormous mountain of ignorance built from every tech situation that's ever frustrated them. From reinstalling system software to changing desktop background to creating macros and spreadsheets, all that stuff gets globbed along with the far simpler and more necessary basics.
You don't need to be a speed demon with the mouse or know how to set up a peripheral daisychain, but you do need to effectively web search, choose appropriate online tools, and type out a couple sentences in under 15 minutes. And here's the key concept people don't get: These are not computer tasks. They're more like home appliance tasks, like punching in a phone number or adjusting a thermostat. There's no soldering required. It doesn’t take a "hacker".
Media doesn’t help. They still categorize stories about YouTube videos or Twitter trends as “tech”. We need to finally decouple basic tasks accomplished with a computing device from tech/computation. By 1950, telephone users felt no need to understand, say, switches and trunk lines. Web searches, online tools, and typing are more involved than using a telephone, but they are still appliance tasks, not tech tasks...though lots of people don't realize this.
Friday, January 3, 2020
Cold Storage in the Cloud
I've got 500GB of crappy useless data on an external drive, just as you almost surely do. Let's see: there's email from the 1990s, music and videos I deliberately winnowed from my active media files years ago, early versions of iPhone apps, disk images of CD-ROMs (remember them?), ancient backups of Desktop, Documents, and iTunes Media files, etc. etc.
It's crap, and it lives on a wheezing external drive that's three years past its natural life span. It's only a matter of time before the drive starts doing the click of death, auguring the final demise of this data. Which would be okay, I suppose, but why let data disappear when storage is cheap? This is America, baby!
It's certainly not worth buying a second hard drive, so thoughts turn heavenward, to The Cloud. "Off-site" back-up is always smart (in case of fire or theft here at The Hovel), and, anyway, the last thing I need is yet another whirling USB drive next to my desk spewing heat and contributing to my power cord carbonara.
And it dawns on me that virtually every computer user over the age of 40 is probably in the exact same predicament: glancing worriedly at a wheezing aged hard drive containing unimportant files, and scheming about storing them in the cloud for pennies. I feel un-alone.
Yet, not for the first time, Adam Smith let me down. The invisible hand of the market has provided no well-trodden path. It's still the Wild West out there for cloud storage. Having taken the deep dive, I'll share what I've learned.
The main problem is that the tech industry has decided, as usual, that consumers want to do dumb expensive stuff and geeks want to do smart cheap stuff....and what I want falls smack in the middle.
If I wanted a friendly, intrusive backup program constantly synching - i.e. serving as a sort of Time Machine (Apple's backup protocol) for the Cloud - a thousand companies would eagerly take my money (there's consensus that Backblaze Unlimited Backup - "The World's Easiest Cloud Backup" - is one of the better options).
I don't want that. I want to park a 500GB lump and forget it. For cheap.
There's Dropbox, but they have a maximum file size of 50GB using their web site and 350GB using their API (i.e. various hook-in services) - both too small for my purposes. And, for privacy, I intend to create a 500GB encrypted disk image and upload it as a single lump. Yes, this would be unwieldy to grab files from, but, again, this is just bulk storage, so I won't be grabbing much, if ever.
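For fellow Mac users curious about the mechanics of that encrypted lump: here's a minimal sketch, driving Apple's built-in hdiutil tool from Python. The size, filesystem, volume name, and file name are my illustrative choices, not gospel.

```python
# A minimal sketch, assuming macOS and its built-in hdiutil tool.
# Creates a growable, AES-encrypted disk image; mount it, drag your
# garbage data in, unmount, and upload the single file.
import subprocess

subprocess.run(
    [
        "hdiutil", "create",
        "-size", "500g",            # ceiling; a SPARSE image only occupies what's used
        "-type", "SPARSE",          # yields one .sparseimage file (the "single lump")
        "-fs", "HFS+J",             # journaled HFS+; APFS is an option on recent macOS
        "-encryption", "AES-256",   # hdiutil prompts for a passphrase
        "-volname", "ColdStorage",  # illustrative volume name
        "ColdStorage.sparseimage",  # illustrative file name
    ],
    check=True,
)
```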
We're still in the consumer range, more or less. Plenty of services will stow this beast for $10-$15/month plus a penny or two per GB to download. But I don't want to pay $180/year for redundant offsite back up of utter garbage. And that's what pulls me out of the realm of "consumers want to do dumb expensive stuff" and firmly into "geeks want to do smart cheap stuff," where the waters are choppy and poorly lit.
I used the term "bulk storage" to describe my vast lump of garbage data, but the industry term is "cold storage". And the coldest of cold storage has long been Amazon Web Services' "Glacier" storage. However, they've recently introduced something colder still, which they call "Glacier Deep Archive". This lets you park 500GB for just 50 freaking cents per month. But they're not exaggerating about the breath-freezing coldness. You must give them 12 hours' notice if you ever want to download the data, and the download costs a steep 9¢/GB (they call it an "egress charge"). Which is probably OK because, like I said, this data is super unnecessary and I'm just being a pack rat. Amazon knows that, and this service is for people in exactly my situation (OK, and, of course, IT managers who want to migrate from tape backups). I could store my garbagey lump for virtually free, and pay a decent amount in the unlikely chance I ever actually need it.
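For the geek-inclined, the whole transaction boils down to a few calls against Amazon's S3 API. Here's a hedged sketch using Python's boto3 library; the bucket and file names are invented, and you'd need AWS credentials configured first.

```python
# A minimal sketch, assuming boto3 is installed and AWS credentials
# are configured. Bucket and key names are invented for illustration.
import boto3

s3 = boto3.client("s3")

# Park the lump directly in the Deep Archive storage class.
# upload_file handles multipart transfer for huge files automatically.
s3.upload_file(
    "ColdStorage.sparseimage",
    "my-cold-storage-bucket",
    "ColdStorage.sparseimage",
    ExtraArgs={"StorageClass": "DEEP_ARCHIVE"},
)

# Years later, if ever: request a thaw, wait ~12 hours (Standard tier),
# then download within the restore window - paying the egress charge.
s3.restore_object(
    Bucket="my-cold-storage-bucket",
    Key="ColdStorage.sparseimage",
    RestoreRequest={"Days": 7, "GlacierJobParameters": {"Tier": "Standard"}},
)
```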
Problem: Amazon Web Services is maddeningly technical; the ultimate example of geeks doing smart cheap stuff. Same for Google Cloud Service and BackBlaze's professional product, B2 (which offers a nice simple web interface for files under 50GB, but with large files you're forced deeply into UNIX/Terminal territory - why they don't simply create an AppleScript to handle the rote tasks is beyond me).
Geekiness aside, B2 may be the pricing sweet spot: it's five times the storage cost of Glacier Deep Archive (i.e. $3/month for 500GB) but 1/10th the download charge (a penny per GB). Do bear in mind, though, that while Amazon Web Services and Google Cloud will almost surely be here 10-15 years from now, I'm less certain Backblaze will. OTOH, the impenetrable geekery may make the whole question moot.
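To make that trade-off concrete, here's the back-of-envelope arithmetic in runnable form, using the prices quoted above (which will surely drift; check current rate cards before trusting any of this):

```python
# Back-of-envelope comparison using the figures quoted in this post.
SIZE_GB = 500

plans = {
    # name: (storage $/GB/month, download $/GB)
    "Glacier Deep Archive": (0.001, 0.09),
    "Backblaze B2":         (0.006, 0.01),
}

for name, (storage_rate, egress_rate) in plans.items():
    print(f"{name}: ${SIZE_GB * storage_rate:.2f}/month parked, "
          f"${SIZE_GB * egress_rate:.2f} per full retrieval")

# B2's extra $2.50/month buys $40 off each full retrieval, so B2 only
# wins if you expect to pull the whole lump down more often than about
# once every 16 months. For true cold storage, Deep Archive wins.
```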
One observation: since I want to park a single 500GB blob and likely never do anything with it, and Amazon Web Services' Frigid Frostbite MoFo tier is insanely cheap, it would make sense to hire a geek to walk me through the process. So that might be a smart solution right there.
Here's what I've decided. I'm going to use ARQ Backup software (Mac or PC), costing a one-time $50. It acts as a friendly, polished front end to all the major cloud services, including DropBox, which is a very nice plus (the DropBox app is super inflexible these days). ARQ is very actively developed, but reportedly quite processor-intensive - don't expect to do much more with your computer while the app is running. Here's an in-depth Macworld review of ARQ Backup from 2017 (which also sheds light on cloud backup, generally).
ARQ doesn't appear to handle Glacier Deep Archive yet, but I'll bet it soon will. Meanwhile, it does work with normal AWS Glacier, if you want still-pretty-crazy-cheap storage with costly downloads.
But I'll hook ARQ Backup up to B2. And here's the thing: having gone this far to find viable cheap cold storage, it looks like ARQ is so easy and powerful that I might want to use it for less hypothermic synch/backup/storage as well. I own a couple more drives with slightly more essential data, so maybe I'll sign up for a couple TBs from B2 for extra redundancy. I'll need to take a close look at privacy/security before I use their encryption rather than encrypting on my side (the latter requires the one-big-lump-of-data approach, which is less viable with data I might need more flexible access to). Although...hmm...as redundant backup (I will also carefully maintain it on my external drives), I may go big-encrypted-lump with these, as well, and swap it out with an updated version bimonthly.
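As for "encrypting on my side": here's the idea in miniature, sketched with Python's cryptography library and invented filenames. (Fernet reads the whole file into memory, so for a truly huge lump you'd chunk the stream, or just stick with the encrypted disk image approach above.)

```python
# A minimal sketch of client-side encryption: seal the file with a key
# only I hold, so the cloud service stores opaque bytes.
from cryptography.fernet import Fernet

key = Fernet.generate_key()
with open("coldstorage.key", "wb") as f:    # guard this file; lose it, lose the data
    f.write(key)

cipher = Fernet(key)
with open("old-email-archive.zip", "rb") as f:      # invented filename
    sealed = cipher.encrypt(f.read())               # fine for modest files only
with open("old-email-archive.zip.enc", "wb") as f:  # this is what gets uploaded
    f.write(sealed)
```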
Potential point of confusion: just as there's the more famous BackBlaze consumer product as well as the geeky BackBlaze B2 discussed here, ARQ seems to make most of their dough selling storage to consumers. I'm not talking about ARQ's storage/backup monthly plans above (which fall under my description of the myriad relatively expensive and smart consumer-side offerings). I'm talking specifically about the ARQ Backup app.
Thursday, December 5, 2019
Slog Assist
The Slog's back end needed some adjustment after 11 years, so I turned to a company called Confluent Forms. Its proprietor, David Wil-Alon Kutcher, is on the Expert team for Google/Blogger's support forum, so he's got this stuff down cold. Confluent Forms does web development, branding, digital marketing and more - including advice/coordination for issues like web hosting, app development, etc.
Jillions of brash kids and dodgy offshore characters purport to do the same (I've been down that road too many times with Chowhound, my smartphone app, and sundry tech schemes), but David's a solid, experienced person who actually knows stuff. Also, his turnaround (even though I was far from his top priority) is impressive; David consistently replies more quickly than I'm able to process. I'm the slow gear. Highly recommended!
Also highly recommended: friend-of-the-Slog Paul Trapani of LISTnet has been providing ongoing tech help and advice all these years. Paul's skilled in a number of tech and biz areas, and if he can't fix a problem, he knows who can. LISTnet is a network for tech/biz needs, with solid local presence in Long Island (LISTnet = Long Island Software & Technology Network).
Saturday, November 30, 2019
Feature Creature
There is nothing more frustrating on god's green earth than trying to use a powerful application to accomplish a simple task.