Indiana native. Purdue grad. Programmer / DevOps by trade. Dog owner. Husband and father. Have questions? Ask!

Drabble by Kevin Fagan for Sun, 21 Jul 2024


Source - Patreon


The SQL Language is Just a Hurdle You Gotta Overcome.


I’m not talking specifically about Microsoft SQL Server or T-SQL here. Let’s zoom out a little and think bigger picture for a second: is the SQL language itself a problem?

Sometimes when I talk to client developers, they gripe about the antiquated language.

The order of a SELECT statement doesn’t make any sense. You shouldn’t have to state what you’re looking for before you even say where you wanna get the data from. The FROM should really go first so that query-completion tools like IntelliSense have a fighting chance to help you write the SELECT part. If we started writing our queries like this:

FROM dbo.Users u
INNER JOIN dbo.Posts p ON u.Id = p.OwnerUserId
SELECT ...

Then as you started typing the SELECT, you could actually get useful suggestions out of IntelliSense. How many times have you started typing a query, only to have query-completion tools throw all kinds of system functions at you? Idiotic.

Exception handling is a painful mess. Let’s be honest here: the majority of stored procedures and functions out there don’t have error handling. They YOLO their way through the data, hoping and praying that things are as we expect, we have the right permissions, structures haven’t changed, and the data is in a useful state. Everybody looks the other way and mumbles, “We’ll handle errors on the application side,” when in reality those errors are either thrown directly at the innocent user, or simply suppressed and not logged anywhere.
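
To see what’s missing, here’s a minimal sketch of basic TRY/CATCH error handling in T-SQL; the procedure name, parameters, and logging choices are hypothetical, not anyone’s production code:

CREATE OR ALTER PROCEDURE dbo.usp_AdjustReputation  /* hypothetical name */
    @UserId INT, @Delta INT
AS
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION;
        UPDATE dbo.Users SET Reputation = Reputation + @Delta WHERE Id = @UserId;
        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
        /* Log the error somewhere durable here, then re-raise so the caller actually hears about it */
        THROW;
    END CATCH
END;

It’s maybe a dozen extra lines, and yet most procedures skip it entirely.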

It’s not really a standard. Oh sure, SELECT/FROM/WHERE/ORDER BY works in most databases, but even trivially simple applications break if you try to port them from one database management system to another. Your skills transfer in a similar way: even if you’re great at T-SQL exception handling, you’re still gonna have to tweak the way you do it in Postgres. The concepts are standard, but the specifics are different.
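
For instance, here’s roughly the same catch-and-report logic in both dialects; this is a sketch with illustrative variable names, the first block assuming SQL Server and the second assuming PostgreSQL’s PL/pgSQL inside a function or DO block:

-- T-SQL
BEGIN TRY
    UPDATE dbo.Users SET Age = Age + 1 WHERE Id = @Id;
END TRY
BEGIN CATCH
    PRINT ERROR_MESSAGE();
END CATCH;

-- PL/pgSQL
BEGIN
    UPDATE users SET age = age + 1 WHERE id = v_id;
EXCEPTION WHEN OTHERS THEN
    RAISE NOTICE 'update failed: %', SQLERRM;
END;

Same concept, different keywords, different error functions, different scoping rules.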

Unit testing is a pipe dream. App code developers know if their code changes will break something. Database developers just punt their stuff into development, run the query a few times, nod because no errors get thrown, and then toss it into production. When code breaks weeks or months later, all we hear is, “Nothing’s been changed.”
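
To be fair, database unit testing frameworks do exist; tSQLt is probably the best-known one for SQL Server, it just sees very little use. Here’s a minimal sketch of what a tSQLt test can look like, reusing the hypothetical procedure from the earlier error-handling sketch:

EXEC tSQLt.NewTestClass 'UserTests';
GO
CREATE PROCEDURE UserTests.[test AdjustReputation adds the delta]
AS
BEGIN
    EXEC tSQLt.FakeTable 'dbo.Users';  -- swap in an empty copy, isolated from real data
    INSERT INTO dbo.Users (Id, Reputation) VALUES (1, 0);

    EXEC dbo.usp_AdjustReputation @UserId = 1, @Delta = 10;  -- hypothetical proc under test

    DECLARE @Actual INT = (SELECT Reputation FROM dbo.Users WHERE Id = 1);
    EXEC tSQLt.AssertEquals @Expected = 10, @Actual = @Actual;
END;
GO
EXEC tSQLt.Run 'UserTests';

The tooling is there; the habit isn’t.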

So why haven’t we moved on past SQL?

In some ways, we have, with object-relational mapping (ORM) tools like Entity Framework, Hibernate, and Django’s ORM. The database administrators in the audience usually cringe when they hear those words, but the reality is that developers lean heavily on those tools to build new applications. I don’t blame them. I would too, for all the reasons I talked about above.

What those tools do is translate your desires into SQL, though, which brings us right back where we started. Often, the SQL they generate sucks for performance, hence the typical DBA’s feelings about ORMs. So why don’t we have a new standard way for applications to talk directly to databases, in a secure, performant, and easy-to-write way?
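
To get a feel for why the generated SQL has that reputation, here’s a contrived sketch; the schema is hypothetical, and the bracketed aliases just mimic the style of older Entity Framework output:

-- What you might write by hand: a narrow column list and a sargable filter
SELECT u.Id, u.DisplayName
FROM dbo.Users u
WHERE u.CreationDate >= '2024-01-01';

-- What a mapper can emit: every mapped column, plus a function wrapped
-- around the indexed column, turning a potential index seek into a scan
SELECT [Extent1].[Id], [Extent1].[DisplayName], [Extent1].[Reputation],
       [Extent1].[CreationDate], [Extent1].[Location], [Extent1].[AboutMe]
FROM [dbo].[Users] AS [Extent1]
WHERE DATEPART(YEAR, [Extent1].[CreationDate]) >= 2024;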

It’s not for lack of trying: at least once every 6 months, I see a post on HackerNews about a better replacement for SQL. Someone puts a lot of thought into the problems, puts a lot of work into a replacement, and then proudly announces it.

And nobody uses it.

Because SQL is the lowest common denominator that works damn near everywhere, for values of “works.”

It works on the back end. Remember when NoSQL came out, and everybody was all “databases r doomd”? And remember what business users said when they wanted to run their reports? NoSQL persistence layers pretty quickly changed their tune, saying, “Oh, well, uh, we meant Not Only SQL, that’s what we meant,” as they struggled to quickly slap in SQL compatibility. Even MongoDB, king of NoSQL, implemented SQL support.

It works on the front end, especially the reporting front end, which is what managers care about. The people who sign the checks wanna see their data in Power BI and Excel. Every new reporting tool that comes out, in order to check boxes and say they’re compatible with every database, implements SQL support. Oh sure, these tools write horrific queries, but they check the box to say they can get data out of all your different persistence layers, and they do it with SQL first because it’s cheap and easy to support lots of databases that way.

I’ll leave you with an amusing quote from Bjarne Stroustrup:

There are only two kinds of languages: the ones people complain about and the ones nobody uses.


I want it all, but it is impossible


I want to read all the newsletters I subscribed to.

I want to read everything in the ten web dev community slacks I joined.

I want to read everything on the seven Discord servers I am on.

I want to read every blog post in the hundreds of feeds I follow.

I want to read every toot of the people I follow on Mastodon.

I want to find more people to follow.

I want to ensure I didn’t miss your cool and happy announcement.

I want to attend your conference/meet-up virtually.

I want to watch your livestream and support you.

I want to view the videos in the archive of this conference that I missed in person.

I want to listen to that audiobook that I’ve had for months.

I want to listen to your podcast.

But I need to work and live life outside of consumption and entertainment.

And then worry that I am behind.

Then I worry that I must apply the latest knowledge and wisdom, which I have probably missed.

And worry that pals could think that I ignored them.

I want it all, but it is impossible.


Please Avoid Memorizing These Unhelpful Mnemonic Devices


“Red touch yellow, legless fellow. Red touch black, legs they lack.”
Remembering that neither coral snakes nor scarlet kingsnakes have legs.

“Uplifting. Star-spangled. Anthem.”
Remembering the letters in “USA.”

“A caT has two. A dOg has one.”
How many horns common household pets would have if those household pets had horns, and also if cats had two of them while dogs only had one.

“Red touch yellow, kill a fellow. The largest nation, Russian Federation.”
Distinguishing between a coral snake and the country of Russia.

“An airplane takes you up to a different plane. A submarine goes in the water.”
Determining whether a vehicle is an airplane or a submarine.

“ER = Eating Rounds. ING = Inside, Normally Garments.”
Remembering whether plates go in a dishwashER or a washING machine.

“Red sky in the morn, a day is born. Red sky at night, a day takes flight.”
Distinguishing between sunrise and sunset.

“fLoors are Lower.”
Telling the difference between a roof and a floor.

“Radical scientists invent time machine so they won’t read spoilers about next show death.”
Recalling the words to the mnemonic, “Red sky in the morning, sailors take warning. Red sky at night, sailor’s delight.”

“Points are three, then feel free. Point is one, flee and shun.”
Remembering which end of a fork to bite down on.

“Big cats terrify antelopes.”
Remembering the standard aging process of baby to child to teenager to adult.

“All tigers can bite.”
Remembering the aging process for literary character Benjamin Button.

“A terrible cardiologist thoughtlessly told Terence to taste thirty thermometers.”
Remembering the aging process for literary character Benjamin Button if his reversed aging was fixed partway through the story, but then he got caught in a time loop and kept living his teenage years over again.

“Red touch gray, get away. Blue touch green, be serene.”
Distinguishing between coral snakes and scarlet kingsnakes if they had different colors, but also one of them was still poisonous and the other wasn’t.

“All good boys deserve fudge.”
Remembering the notes on the lines of the treble clef musical staff slightly wrong.

“See from their view? Then that’s you. See from afar? Someone else they are.”
Determining whether someone is yourself or a different person.

“Ripping up a rare artwork, Edward realizes Friday Raphael paintwork biting afternoons aren’t really acceptable socially.”
The first word of every previous mnemonic in this list.


IEEE, BBC, and C2PA

I hadn't planned to write about more C2PA problems so soon, but my last few blog entries on C2PA's problems have struck a chord with readers, and some of their feedback is very time-critical. (This time sensitivity is ironic, since my last blog entry pointed out C2PA's problems with timestamps.)

IEEE

The first feedback I received mentioned a recent article (4-March-2024) at IEEE Spectrum, in which David Evan Harris and Lawrence Norden reviewed Meta's proposed solution to AI-generated media. The article's title nailed the problem (their bold for emphasis):


Meta's AI Watermarking Plan Is Flimsy, at Best
Watermarks are too easy to remove to offer any protection against disinformation

The IEEE article references Meta's recent decision to label AI-generated images using visible watermarks, invisible watermarks, and metadata based on C2PA. Then it details the solution's core problems:
The most obvious weakness is that Meta’s system will work only if the bad actors creating deepfakes use tools that already put watermarks—that is, hidden or visible information about the origin of digital content—into their images. Most unsecured “open-source” generative AI tools don’t produce watermarks at all.

...

We are also concerned that bad actors can easily circumvent Meta’s labeling regimen even if they are using the AI tools that Meta says will be covered, which include products from Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock. Given that it takes about 2 seconds to remove a watermark from an image produced using the current C2PA watermarking standard that these companies have implemented, Meta’s promise to label AI-generated images falls flat.

We know this because we were able to easily remove the watermarks Meta claims it will detect—and neither of us is an engineer. Nor did we have to write a single line of code or install any software.

It's good to see that I'm not the only person who is realizing C2PA's ineffectiveness.

BBC

I have been repeatedly mentioning how my FotoForensics service is seeing altered pictures with C2PA metadata being used for fake IDs, fake medical documents, and other types of forgeries. This recent comment from a user called "bots don't cry" details the BBC's "verification" of a forged video. This person wrote:
BBC is the first large media outlet to have implemented #C2PA for one of their videos, available at https://www.bbc.com/news/world-latin-america-68462851

The video is a montage of 2 social media videos with a map and added text and logo overlays, put together by the BBC's fact-checking unit, BBC Verify.

Key finding

BBC Verify went through 3 different manifests for the same video.
One of the videos, authenticated as "Genuine," is fake.

manifest 1, published 2024-03-04T15:02:32.337Z
1: No Ingredient assertions; instead, a CBOR:ReviewBody tag describing how they authenticated the videos (OSINT), and a link to the original URL of one of the videos
2: The info box about the C2PA metadata under the video mirrors the CBOR:ReviewBody, but is heavily redacted.

manifest 2, published 2024-03-04T18:03:59.565Z
The info box about the C2PA metadata changed; I checked the video, and it had been republished
1: The video has a new manifest that doesn't include any reference to manifest 1
2: No Ingredient assertions
3: The manifest 1 CBOR:ReviewBody is stripped out from all URLs
4: The info box about the C2PA metadata under the video is now identical to the CBOR:ReviewBody

manifest 3, published 2024-03-05T11:25:10.796Z
The video file was moved to a cloudfront.net CDN server (the video was originally on an Akamai CDN for a BBC R&D metadata API)
1: The video has a new manifest that doesn't include any reference to manifest 1 or 2
2: It's manifest 2 with a new signature

Fake video finding, 2024-03-06 in the morning https://twitter.com/botsdontcry1/status/1765285019646308374
I found out that one of the 2 social media videos was edited with an added soundtrack (loud gunshots) to create a more dramatic effect before being published to social media.
It was easy to spot: the social media platform (TikTok) shows a disclaimer about the added soundtrack, but the fact checkers didn't bother.
Just listening to the video's audio feels weird; a simple display of the audio waveform shows a repeating 15-second sample (the gunshots).

Thanks for your detailed review of C2PA!

Tracking Sources

The news article that "bots don't cry" references is at the BBC, titled "Haiti violence: Haiti gangs demand PM resign after mass jailbreak". The story is dated "3 days ago". (As I write this, it is March 7th, so that would be March 4th.)



Halfway down the article is the video labeled, "Verified video of people fleeing Haiti violence." On the video itself are two different notations saying it has been verified.



Below the video is a section that shows "How we verified this". This is where the BBC details the results from Content Credentials and their own verification findings.



The BBC's description of the source material should be a huge red flag. They wrote, "The video from the cemetery was first uploaded to TikTok on Saturday 2 March at 1750G (1250 local)."
  • Where is this source TikTok video for independent validation? It's not mentioned in the article and not in the C2PA metadata from the BBC video that I downloaded. Moreover, the C2PA assertions in the video I downloaded from the BBC differ from the information provided by "bots don't cry". It's very likely that either (A) the BBC revised their video after "bots don't cry" retrieved it, or (B) the BBC provides different versions of the video based on your browser, bandwidth, or other factors. I'm only able to evaluate the version that I could download.

    In this case, I got lucky because "bots don't cry" had a version of the BBC's video with additional information and noticed that the TikTok video splice listed the username as "usermaloulou557". The username led to the source video URL.

  • Who uploaded it to TikTok? A user named "usermaloulou557". Who is this person? Where is he located? Did anyone vet him or the content he posts? This isn't a self-proclaimed "citizen journalist" -- this is just someone on TikTok.

  • Although it was uploaded to TikTok on March 2, we don't know when the video was created.

Ironically, the BBC wrote an article in 2022 about the spread of fake videos on TikTok. They wrote, "False TikTok videos draw millions of views" and "TikTok has emerged as one of the leading platforms for snappy false videos". Although the 2022 article was about false Ukrainian videos, it still holds true for other regions. I'm actually stunned that the BBC would use a poorly referenced TikTok video as an authoritative source.

But it gets worse! The BBC's verification process also mentioned a second video: "The video of the police station is from a Facebook post published on Sunday evening 3 March. We found no earlier versions of this video online." An unreferenced and anonymous Facebook video. While it was posted on Facebook on March 3rd, they didn't identify when it was recorded. Just because they couldn't find an earlier version online doesn't mean it is authentic or timely. Moreover, there is no link to the source video so it cannot be independently validated. The article at IEEE noted that Meta is flagging some content on Facebook with C2PA information. Did the unspecified video at Facebook have any C2PA-based vetting?

The BBC verification box includes a "View more" link at the bottom. Clicking it shows you the results at the Content Credentials web site.



According to Content Credentials, everything in the video appears legitimate. The cryptographic signatures are valid and signed by the BBC, and the notations about the dimension changes and publication are minimally consistent with the BBC's own notes: "Superficial edits were made to this content to improve technical quality, in line with editorial guidelines." (Personally, I think splicing a TikTok video to a Facebook video is more than "superficial edits", but they can interpret their own editorial guidelines.)

The Content Credentials result doesn't mean the video content is legitimate. It only means that the BBC claims that the BBC's edited version of the BBC's video came from the BBC. But they could have easily done this with a simple IPTC or XMP metadata attribution field; C2PA doesn't add any additional value to this vetting process. (But it sure looks impressive to see the word "Verified" labeled twice on the picture and three times below it! It's like screaming "No really! It's legit!" over and over so people will believe it.)

Diving into Details

I grabbed a copy of the BBC's video that they listed as verified. If you are sensitive to this kind of content, then beware: There are the sounds of gunfire and the video shows people running (and walking), but no blood or gore.



To analyze this video, I used my Hintfo service to view the metadata. The main things I noticed:
  • The video is annotated by the BBC. They added red and blue Verified banners, a mention of sources (TikTok and Facebook), and occasional text to the video. They also spliced in maps. The BBC's final composite is not the original video. The source TikTok video does not have these annotations.

  • There is a wide range of timestamps, spanning several days:

    • The XMP metadata (stuffed in the C2PA data structure) says it is from Mar 02, 2024 and attributed to TikTok. However, there is no time provided.

    • The text description mentions a Facebook video from March 3rd, but there is no metadata timestamp referencing this content. The C2PA metadata does not have an "assertion" record for this video.

    • The Quicktime metadata says the video is from 2024-03-04 14:17:42. This has no attribution but corresponds with the day the article was published.

    • The C2PA metadata says it is from 2024-03-05 11:20:41 GMT and 2024-03-05 11:25:10 GMT. This is after the article was published and corresponds with the BBC verifying the video.

    • The trusted third-party authenticated timestamp is from 2024-03-05 11:25:45 GMT. This shows when they finished authenticating the video. It is unclear whether the BBC originally published the unverified video. What is clear: the BBC added the verified video without revising the article's publication date.

  • The metadata includes two separate GPS coordinates. The first is 18.534108,-72.344307 and places the location in the center of the Grand Cemetery. I don't know the cemetery's layout or the GPS accuracy; this could be anywhere in the area. I suspect that the BBC selected an arbitrary location in the cemetery.

  • The second GPS record has enough decimal places to be precise to the subatomic level! (At 14 decimal places, a change in the last digit of a degree works out to roughly a nanometer on the ground.) Keep in mind, 'precise' and 'accurate' are not the same thing. The coordinates identify the location as 18.73659357791219,-72.41756989951344. This is the location of the police station in the city of Cabaret that was spliced in from the unidentified Facebook video.

  • When playing the BBC's video, watch for very clear splices where the scene jumps and people suddenly appear. For example, at 10 seconds into the BBC's version of the video, a man in a white shirt suddenly appears and is moving past the camera. Even though the person suddenly appears, the audio does not have a corresponding splice.

  • The source video at TikTok has similar sharp splices with no interruption in the gunfire audio. (E.g., 20 seconds, 31 seconds, 36 seconds, and so on.) Here's the source video from TikTok with the obvious splices (watch for sudden jumps in the footage):

  • The source TikTok video lacks the sense of urgency that you would expect from nearby sustained automatic gunfire. For example:

    • At the end of the video, you can see people walking calmly out of the cemetery.

    • In the last few seconds, they pass a man in a black shirt who is carrying a bucket and heading in the opposite direction. The bucket man doesn't appear concerned.

  • Even though the photographer in the TikTok video is moving, the gunfire's volume never changes. If the photographer were actually moving away from (or toward) the gunfire, then the volume should change.

  • Although the TikTok video contains numerous visual splices, none of the splices are aligned with audio splices.
The user "bots don't cry" went one step further in this analysis, identifying the source audio at TikTok:


The audio just sounds like automatic gunfire. It is the same audio track from the BBC and source cemetery videos. The TikTok gunfire video is from December 2023. However, we cannot date this audio clip; it may be reused from some older audio recording.

The video used by the BBC is spliced from unvetted sources, and one of the sources was spliced and has a fake audio track. And yet, somehow the BBC "verified" this as being authentic. Then the BBC applied a C2PA signature to help support their false claim of authenticity.

Flimsy at Best

This kind of false validation is how I anticipated that C2PA and Content Credentials would be used. The entire C2PA model is built on "trust":
  • We trust that the metadata accurately reflects the content. This explicitly means trusting in the honesty of the person inserting the metadata.

  • We trust that each new signer verified the previous claims.

  • We trust that a signer didn't alter the previous claims.

  • We trust that the cryptographic certificate (cert) was issued by an authoritative source.

  • We trust that the metadata and cert represent an authoritative source. While the cert allows us to validate how it was issued ("trust but verify"), it doesn't validate who it was issued to or how it is used.

  • We trust the validation tools to perform a proper validation.

  • We trust that any bad actors who violate any of these trusts will be noticed before causing any significant damage.

  • And if there is a bad actor, we trust there is someone who notices, somewhere to report it, and someone who can do something about it. And we trust that this will happen quickly.

All of this trust is great for keeping honest people honest. However, it does nothing to deter someone with ulterior motives. As the authors at IEEE Spectrum noted, it is trivial to bypass C2PA detection. And with the BBC, we have untrusted content (obvious splices, fake audio, and no verifiable dates), unvetted metadata, and unverified prior claims wrapped in a cryptographic certificate from the BBC. (But the BBC said it was verified five times and in red and blue colors! So it must be legitimate!) Since the BBC is one of the C2PA founding members, it's especially disappointing that they would use this technology to justify a flawed and incomplete verification process.

C2PA doesn't authenticate, doesn't validate, and doesn't provide reliable provenance information. It can easily be used to create forgeries. With the BBC, we have now seen C2PA used by a media outlet to support an unproven verification claim. It's only a matter of time before other fields experience this new type of fraud introduced by C2PA: fraud that provides an unjustified and false sense of authenticity to the typical reader. (Not my readers, my readers totally get it!)

Huge thanks to Shawn and bots don't cry for these timely findings.

Text Recognition Map Search

The David Rumsey Map Collection has unveiled its amazing new Text on Maps feature, which allows users to search one of the world's largest collections of digitized maps by text. The collection contains over 57,000 geo-referenced vintage maps, and with Text on Maps you can now search for where any word appears on any of them. Old maps are an invaluable…