Indiana native. Purdue grad. Programmer / Dev Ops in trade. Dog owner. Husband and father. Have questions? Ask!

I want it all, but it is impossible


I want to read all the newsletters I subscribed to.

I want to read everything in the ten web dev community slacks I joined.

I want to read everything on the seven Discord servers I am on.

I want to read every blog post in the hundreds of feeds I follow.

I want to read every toot of the people I follow on Mastodon.

I want to find more people to follow.

I want to ensure I didn’t miss your cool and happy announcement.

I want to attend your conference/meet-up virtually.

I want to watch your livestream and support you.

I want to view the videos in the archive of this conference that I missed in person.

I want to listen to that audiobook that I’ve had for months.

I want to listen to your podcast.

But I need to work and live life outside of consumption and entertainment.

And then worry that I am behind.

Then I worry that I must apply the latest knowledge and wisdom, but that I have probably missed it.

And worry that pals could think that I ignored them.

I want it all, but it is impossible.

Read the whole story
11 days ago
Central Indiana
Share this story

Please Avoid Memorizing These Unhelpful Mnemonic Devices


“Red touch yellow, legless fellow. Red touch black, legs they lack.”
Remembering that neither coral snakes nor scarlet kingsnakes have legs.

“Uplifting. Star-spangled. Anthem.”
Remembering the letters in “USA.”

“A caT has two. A dOg has one.”
How many horns common household pets would have if those household pets had horns, and also if cats had two of them while dogs only had one.

“Red touch yellow, kill a fellow. The largest nation, Russian Federation.”
Distinguishing between a coral snake and the country of Russia.

“An airplane takes you up to a different plane. A submarine goes in the water.”
Determining whether a vehicle is an airplane or a submarine.

“ER = Eating Rounds. ING = Inside, Normally Garments.”
Remembering whether plates go in a dishwashER or a washING machine.

“Red sky in the morn, a day is born. Red sky at night, a day takes flight.”
Distinguishing between sunrise and sunset.

“fLoors are Lower.”
Telling the difference between a roof and a floor.

“Radical scientists invent time machine so they won’t read spoilers about next show death.”
Recalling the words to the mnemonic, “Red sky in the morning, sailors take warning. Red sky at night, sailor’s delight.”

“Points are three, then feel free. Point is one, flee and shun.”
Remembering which end of a fork to bite down on.

“Big cats terrify antelopes.”
Remembering the standard aging process of baby to child to teenager to adult.

“All tigers can bite.”
Remembering the aging process for literary character Benjamin Button.

“A terrible cardiologist thoughtlessly told Terence to taste thirty thermometers.”
Remembering the aging process for literary character Benjamin Button if his reversed aging was fixed partway through the story, but then he got caught in a time loop and kept living his teenage years over again.

“Red touch gray, get away. Blue touch green, be serene.”
Distinguishing between coral snakes and scarlet kingsnakes if they had different colors, but also one of them was still poisonous and the other wasn’t.

“All good boys deserve fudge.”
Remembering the notes on the lines of the treble clef musical staff slightly wrong.

“See from their view? Then that’s you. See from afar? Someone else they are.”
Determining whether someone is yourself or a different person.

“Ripping up a rare artwork, Edward realizes Friday Raphael paintwork biting afternoons aren’t really acceptable socially.”
The first word of every previous mnemonic in this list.



I hadn't planned to write about more C2PA problems so soon, but my last few blog entries on C2PA's problems have struck a chord with readers, and some of their feedback is very time critical. (This time sensitivity is ironic, since my last blog entry pointed out C2PA's problems with timestamps.)


The first feedback I received mentioned a recent article (4-March-2024) at IEEE Spectrum. IEEE's David Evan Harris and Lawrence Norden reviewed Meta's proposed solution to AI-generated media. The article's title nailed the problem (their bold for emphasis):

Meta's AI Watermarking Plan Is Flimsy, at Best
Watermarks are too easy to remove to offer any protection against disinformation

The IEEE article references Meta's recent decision to label AI-generated images using visible watermarks, invisible watermarks, and metadata based on C2PA. Then it details the solution's core problems:
The most obvious weakness is that Meta’s system will work only if the bad actors creating deepfakes use tools that already put watermarks—that is, hidden or visible information about the origin of digital content—into their images. Most unsecured “open-source” generative AI tools don’t produce watermarks at all.


We are also concerned that bad actors can easily circumvent Meta’s labeling regimen even if they are using the AI tools that Meta says will be covered, which include products from Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock. Given that it takes about 2 seconds to remove a watermark from an image produced using the current C2PA watermarking standard that these companies have implemented, Meta’s promise to label AI-generated images falls flat.

We know this because we were able to easily remove the watermarks Meta claims it will detect—and neither of us is an engineer. Nor did we have to write a single line of code or install any software.

It's good to see that I'm not the only person who is realizing C2PA's ineffectiveness.
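To make the IEEE authors' "about 2 seconds" claim concrete: in JPEG files, C2PA manifests ride along in APP11 (JUMBF) segments, next to EXIF/XMP in APP1. The sketch below is my own illustration, not the IEEE authors' method (they say they wrote no code at all), and real tools like exiftool handle far more edge cases. But at its core, removing this "watermark" is just dropping the APPn segments while leaving the image data untouched:

```python
def strip_app_segments(jpeg: bytes) -> bytes:
    """Drop APP0..APP15 segments (EXIF, XMP, C2PA/JUMBF) from a JPEG."""
    assert jpeg[:2] == b"\xff\xd8", "not a JPEG (missing SOI marker)"
    out = bytearray(b"\xff\xd8")
    i = 2
    while i < len(jpeg) - 1:
        marker = jpeg[i + 1]
        if marker == 0xDA:                   # SOS: entropy-coded data follows,
            out += jpeg[i:]                  # copy the rest verbatim
            break
        if marker == 0xD9:                   # EOI: end of image
            out += b"\xff\xd9"
            break
        length = int.from_bytes(jpeg[i + 2:i + 4], "big")
        if not (0xE0 <= marker <= 0xEF):     # keep everything except APPn
            out += jpeg[i:i + 2 + length]
        i += 2 + length
    return bytes(out)

# A tiny synthetic JPEG: SOI, an APP1 "Exif" segment, a DQT-like segment, EOI.
fake = (b"\xff\xd8"
        + b"\xff\xe1" + (6).to_bytes(2, "big") + b"Exif"
        + b"\xff\xdb" + (3).to_bytes(2, "big") + b"\x00"
        + b"\xff\xd9")
print(b"Exif" in strip_app_segments(fake))  # False: the metadata segment is gone
```

The pixels and cryptography are untouched; the provenance simply vanishes, which is the whole problem.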


I have been repeatedly mentioning how my FotoForensics service is seeing altered pictures with C2PA metadata being used for fake IDs, fake medical documents, and other types of forgeries. This recent comment from a user called "bots don't cry" details the BBC's "verification" of a forged video. This person wrote:
The BBC is the first large media outlet to have implemented #C2PA for one of their videos, available at

The video is a montage of 2 social media videos with a map and added text and logo overlays, done by the fact-checking unit of the BBC: BBC Verify

Key findings

BBC Verify went through 3 different manifests for the same video.
One of the videos, authenticated as "Genuine," is fake.

manifest 1, published 2024-03-04T15:02:32.337Z
1: No Ingredient assertions; instead, a CBOR:ReviewBody tag describing how they authenticated the videos (OSINT), and a link to the original URL of one of the videos
2: The info box about the C2PA metadata under the video mirrors the CBOR:ReviewBody, but is heavily redacted.

manifest 2, published 2024-03-04T18:03:59.565Z
The info box about the C2PA metadata had changed; I checked the video, and it had been republished
1: The video has a new manifest that doesn't include any reference to manifest 1
2: No Ingredient assertions
3: The manifest 1 CBOR:ReviewBody is stripped out from all URLs
4: The info box about the C2PA metadata under the video is now identical to the CBOR:ReviewBody

manifest 3, published 2024-03-05T11:25:10.796Z
The video file was moved to a CDN server (the video was originally on an Akamai CDN for a BBC R&D metadata API)
1: The video has a new manifest that doesn't include any reference to manifest 1 or 2
2: It's manifest 2 with a new signature

Fake video finding, 2024-03-06 in the morning
I found out that one of the 2 social media videos was edited with an added soundtrack (loud gunshots) to create a more dramatic effect before being published to social media.
It was easy to spot: the social media platform (TikTok) has a disclaimer about the added soundtrack, but the fact checkers didn't bother.
Just listening to the audio of the video feels weird; a simple display of the audio waveform shows a repeating 15-second sample (the gunshots).

Thanks for your detailed review about C2PA!

Tracking Sources

The news article that "bots don't cry" references is at the BBC, titled "Haiti violence: Haiti gangs demand PM resign after mass jailbreak". The story is dated "3 days ago". (As I write this, it is March 7th, so that would be March 4th.)

Halfway down the article is the video labeled, "Verified video of people fleeing Haiti violence." On the video itself are two different notations saying it has been verified.

Below the video is a section that shows "How we verified this". This is where the BBC details the results from Content Credentials and their own verification findings.

The BBC's description of the source material should be a huge red flag. They wrote, "The video from the cemetery was first uploaded to TikTok on Saturday 2 March at 1750G (1250 local)."
  • Where is this source TikTok video for independent validation? It's not mentioned in the article and not in the C2PA metadata from the BBC video that I downloaded. Moreover, the C2PA assertions in the video I downloaded from the BBC differ from the information provided by "bots don't cry". It's very likely that either (A) the BBC revised their video after "bots don't cry" retrieved it, or (B) the BBC provides different versions of the video based on your browser, bandwidth, or other factors. I'm only able to evaluate the version that I could download.

    In this case, I got lucky because "bots don't cry" had a version of the BBC's video with additional information and noticed that the TikTok video splice listed the username as "usermaloulou557". The username led to the source video URL.

  • Who uploaded it to TikTok? A user named "usermaloulou557". Who is this person? Where is he located? Did anyone vet him or the content he posts? This isn't a self-proclaimed "citizen journalist" -- this is just someone on TikTok.

  • Although it was uploaded to TikTok on March 2, we don't know when the video was created.
Ironically, the BBC wrote an article in 2022 about the spread of fake videos on TikTok. They wrote, "False TikTok videos draw millions of views" and "TikTok has emerged as one of the leading platforms for snappy false videos". Although the 2022 article was about false Ukrainian videos, it still holds true about other regions. I'm actually stunned that the BBC would use a poorly referenced TikTok video as an authoritative source.

But it gets worse! The BBC's verification process also mentioned a second video: "The video of the police station is from a Facebook post published on Sunday evening 3 March. We found no earlier versions of this video online." An unreferenced and anonymous Facebook video. While it was posted on Facebook on March 3rd, they didn't identify when it was recorded. Just because they couldn't find an earlier version online doesn't mean it is authentic or timely. Moreover, there is no link to the source video so it cannot be independently validated. The article at IEEE noted that Meta is flagging some content on Facebook with C2PA information. Did the unspecified video at Facebook have any C2PA-based vetting?

The BBC verification box includes a "View more" link at the bottom. Clicking it shows you the results at the Content Credentials web site.

According to Content Credentials, everything in the video appears legitimate. The cryptographic signatures are valid and signed by the BBC, and the notations about the dimension changes and publication are minimally consistent with the BBC's own notes: "Superficial edits were made to this content to improve technical quality, in line with editorial guidelines." (Personally, I think splicing a TikTok video to a Facebook video is more than "superficial edits", but they can interpret their own editorial guidelines.)

The Content Credentials result doesn't mean the video content is legitimate. It only means that the BBC claims that the BBC's edited version of the BBC's video came from the BBC. But they could have easily done this with a simple IPTC or XMP metadata attribution field; C2PA doesn't add any additional value to this vetting process. (But it sure looks impressive to see the word "Verified" labeled twice on the picture and three times below it! It's like screaming "No really! It's legit!" over and over so people will believe it.)

Diving into Details

I grabbed a copy of the BBC's video that they listed as verified. If you are sensitive to this kind of content, then beware: There are the sounds of gunfire and the video shows people running (and walking), but no blood or gore.

To analyze this video, I used my Hintfo service to view the metadata. The main things I noticed:
  • The video is annotated by the BBC. They added red and blue Verified banners, a mention of sources (TikTok and Facebook), and occasional text to the video. They also spliced in maps. The BBC's final composite is not the original video. The source TikTok video does not have these annotations.

  • There are a wide range of timestamps that span days:

    • The XMP metadata (stuffed in the C2PA data structure) says it is from Mar 02, 2024 and attributed to TikTok. However, there is no time provided.

    • The text description mentions a Facebook video from March 3rd, but there is no metadata timestamp referencing this content. The C2PA metadata does not have an "assertion" record for this video.

    • The Quicktime metadata says the video is from 2024-03-04 14:17:42. This has no attribution but corresponds with the day the article was published.

    • The C2PA metadata says it is from 2024-03-05 11:20:41 GMT and 2024-03-05 11:25:10 GMT. This is after the article was published and corresponds with the BBC verifying the video.

    • The trusted third-party authenticated timestamp is from 2024-03-05 11:25:45 GMT. This shows when they finished authenticating the video. It is unclear whether the BBC originally published the unverified video. What is clear: the BBC added the verified video without revising the article's publication date.

  • The metadata includes two separate GPS coordinates. The first is 18.534108,-72.344307 and places the location in the center of the Grand Cemetery. I don't know the cemetery's layout or the GPS accuracy; this could be anywhere in the area. I suspect that the BBC selected an arbitrary location in the cemetery.

  • The second GPS record has enough decimal points to be precise to the subatomic level! Keep in mind, 'precise' and 'accurate' are not the same things. The coordinates identify the location as 18.73659357791219,-72.41756989951344. This is the location of the police station in the city of Cabaret that was spliced in from the unidentified Facebook video.

  • When playing the BBC's video, watch for very clear splices where the scene jumps and people suddenly appear. For example, at 10 seconds into the BBC's version of the video, a man in a white shirt suddenly appears and is moving past the camera. Even though the person suddenly appears, the audio does not have a corresponding splice.

  • The source video at TikTok has similar sharp splices with no interruption in the gunfire audio. (E.g., 20 seconds, 31 seconds, 36 seconds, and so on.) Here's the source video from TikTok with the obvious splices (watch for sudden jumps in the footage):

  • The source TikTok video lacks the sense of urgency that you would expect from nearby sustained automatic gunfire. For example:

    • At the end of the video, you can see people walking calmly out of the cemetery.

    • In the last few seconds, they pass a man in a black shirt who is carrying a bucket and heading in the opposite direction. The bucket man doesn't appear concerned.

  • Even though the photographer in the TikTok video is moving, the gunfire's volume never changes. If the photographer were actually moving away from (or toward) the gunfire, then the volume should change.

  • Although the TikTok video contains numerous visual splices, none of the splices are aligned with audio splices.
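On the "subatomic" precision of that second GPS record: a coordinate with 14 decimal places of a degree implies a resolution of roughly a nanometer. A quick back-of-the-envelope sketch (the 111.32 km-per-degree figure is my own approximation, not from the metadata):

```python
# One degree of latitude is roughly 111.32 km (it varies slightly by latitude,
# but that's close enough for an order-of-magnitude check).
METERS_PER_DEGREE = 111_320

def last_digit_resolution_m(coord: str) -> float:
    """Meters represented by one unit of a coordinate's last decimal place."""
    decimals = len(coord.split(".")[1])
    return METERS_PER_DEGREE * 10.0 ** -decimals

# The second GPS record from the BBC video's metadata (14 decimal places):
print(last_digit_resolution_m("18.73659357791219"))  # ~1.1e-09 meters
```

A nanometer is about ten hydrogen atoms wide; consumer GPS is accurate to meters at best, so every digit past the fifth decimal place or so is noise.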
The user "bots don't cry" went one step further in this analysis, identifying the source audio at TikTok:

The audio just sounds like automatic gunfire. It is the same audio track from the BBC and source cemetery videos. The TikTok gunfire video is from December 2023. However, we cannot date this audio clip; it may be reused from some older audio recording.
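The looped-sample observation is straightforward to reproduce in principle: a repeated clip shows up as a strong autocorrelation peak at a lag equal to the loop length. This is a hedged, self-contained sketch using synthetic data, not the actual TikTok audio:

```python
import numpy as np

def loop_period_seconds(samples, rate, min_lag_s=1.0, max_lag_s=30.0):
    """Estimate the period of a repeating audio loop via autocorrelation.

    A looped sample (e.g., a ~15 s gunfire clip) produces a strong
    autocorrelation peak at a lag equal to the loop length.
    """
    x = np.asarray(samples, dtype=float)
    x -= x.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]   # non-negative lags only
    lo, hi = int(min_lag_s * rate), int(max_lag_s * rate)
    return (lo + int(np.argmax(ac[lo:hi]))) / rate

# Synthetic check: a 5-second noise pattern tiled 6 times at a 200 Hz rate.
rate = 200
rng = np.random.default_rng(0)
pattern = rng.standard_normal(5 * rate)
track = np.tile(pattern, 6)
print(loop_period_seconds(track, rate))  # 5.0
```

Running the same analysis on the video's extracted audio track would be expected to show the ~15-second period "bots don't cry" spotted in the waveform.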

The video used by the BBC is spliced from unvetted sources, and one of the sources was spliced and has a fake audio track. And yet, somehow the BBC "verified" this as being authentic. Then the BBC applied a C2PA signature to help support their false claim of authenticity.

Flimsy at Best

This kind of false validation is how I anticipated that C2PA and Content Credentials would be used. The entire C2PA model is built on "trust":
  • We trust that the metadata accurately reflects the content. This explicitly means trusting in the honesty of the person inserting the metadata.

  • We trust that each new signer verified the previous claims.

  • We trust that a signer didn't alter the previous claims.

  • We trust that the cryptographic certificate (cert) was issued by an authoritative source.

  • We trust that the metadata and cert represent an authoritative source. While the cert allows us to validate how it was issued ("trust but verify"), it doesn't validate who it was issued to or how it is used.

  • We trust the validation tools to perform a proper validation.

  • We trust that any bad actors who violate any of these trusts will be noticed before causing any significant damage.

  • And if there is a bad actor, we trust there is someone who notices, somewhere to report it, and someone who can do something about it. And we trust that this will happen quickly.
All of this trust is great for keeping honest people honest. However, it does nothing to deter someone with ulterior motives. As the scholars at IEEE noted, it is trivial to bypass C2PA detection. And with the BBC, we have untrusted content (obvious splices, fake audio, and no verifiable dates), unvetted metadata, and unverified prior claims wrapped in a cryptographic certificate from the BBC. (But the BBC said it was verified five times and in red and blue colors! So it must be legitimate!) Given that the BBC is one of the C2PA founding members, it's disappointing that they would use this technology to justify a flawed and incomplete verification process.

C2PA doesn't authenticate, doesn't validate, and doesn't provide reliable provenance information. It can easily be used to create forgeries. With the BBC, we have now seen C2PA used by a media outlet to support an unproven verification claim. It's only a matter of time before other fields experience this new type of fraud introduced by C2PA: fraud that provides an unjustified and false sense of authenticity to the typical reader. (Not my readers, my readers totally get it!)

Huge thanks to Shawn and bots don't cry for these timely findings.

Text Recognition Map Search

The David Rumsey Map Collection has unveiled its amazing new Text on Maps feature, which allows users to search one of the world's largest collections of digitized maps by text. The collection contains over 57,000 geo-referenced vintage maps; using the new feature, you can now search for where any word appears on any of those maps. Old maps are an invaluable

Barbie Girl, in the Style of Six Classical Composers


This is fun: Aqua's pop hit Barbie Girl, redone in the style of six classical composers: Mozart, Beethoven, Schumann, Schubert, Chopin, and Ravel. (via @Erikmitk)

Tags: Aqua · Barbie · music · remix · video

In macOS Sonoma, Touch ID for sudo can survive updates


One of the great things about having a Mac with built-in biometric authentication is not having to constantly type in your password. It's particularly nice for those of us who work in Terminal, where you can set up Touch ID to authenticate the sudo command that bestows administrative powers.

However there’s been one drawback to enabling that feature: because it means altering a system file, the change wouldn’t generally survive a system update—the file would get overwritten by the stock file every time macOS released a new version, meaning you’d have to go in and make the change again. I’m probably not alone in having given up on having Touch ID enabled, rather than playing the constant cat-and-mouse game.

But wait, there’s good news: in macOS Sonoma, Apple appears to have provided a new framework to work around this problem. As Mastodon user Rachel pointed out, Sonoma allows for an additional file that will persist through updates. So you can make the change once and it should stick.

From what I can tell, this system was put in place precisely for this feature. Apple provides a sudo_local.template file as an example, which not only contains a comment explaining that sudo_local will survive updates, but also even includes the code necessary to enable Touch ID.

So, without further ado, here are the steps for enabling this feature in macOS Sonoma, once and for all:1

Open the Terminal app. Navigate to the directory that stores the authentication files by typing the following:

cd /etc/pam.d

Next, copy Apple’s provided template to the actual file that the system will read. You’ll need to use sudo and enter your administrator password to get permission:

sudo cp sudo_local.template sudo_local

Finally, open up the file you just made using your text editor of choice; I prefer pico.2 You’ll need to use sudo again here.

sudo pico sudo_local

In that file, navigate to the line that contains pam_tid.so and delete the hash (#) at the beginning. Save the file by pressing Control-X, typing 'Y' to confirm your changes, and hitting Return.
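For reference, after uncommenting, the active line in /etc/pam.d/sudo_local should look like the following (this is the Touch ID PAM module line I'd expect from Apple's shipped template; assuming the template is unchanged in the release build, exact spacing may differ):

```
auth       sufficient     pam_tid.so
```

The comment lines above it in the template can stay; PAM ignores anything starting with #.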

That’s it; you’re done! We’ll have to wait and see if this truly works as described, but fingers crossed you should be able to keep Touch ID access for sudo for ever and ever.

  1. With the caveat that Sonoma is, of course, still in beta, and this could change upon the official release, as unlikely as that seems. 
  2. Miss me with your command-line text editor wars. 