Journal Entries By Tag: #WebDev

Assorted journal entries with the tag #WebDev.


Site Update: Welcome to the Grid!

TL;DR — Some details around the recent addition of a scrolling background grid to the site.

👓 2 minutes

Recently, I decided to try my hand at some CSS shenanigans, and spent a few hours replacing this site’s long-serving background image with a scrolling grid background.

I’ve been obsessed with digital grids ever since I first saw the movie Tron (presumably during its initial HBO release, when I was around 6). Tron and Flynn were some of my first heroes (they fought for the users), and I remember being blown away not just by the movie, but by being able to play the same game they play in the movie via the incredible Tron arcade cabinet. I distinctly remember going to the Chuck E. Cheese’s near our house and playing it, complete with the special blue joystick, and just like the movie, it was amazing. And it was all grids.

[Image: Tron arcade machine, photo by Darth-Wiki-Man, used under CC-BY-SA]
[Image: Lightcycle game, screenshot from Tron by Bally Midway]

So that’s the “why”; as for the “how”…

The capabilities of #web rendering engines (AKA #browsers) have improved immensely over the last few years, particularly in the area of CSS effects. A link to a link to a link led me to a couple of Stack Exchange questions and a collection of fantastic synthwave-inspired CSS effects. The next thing I know, I’ve replaced the site’s static background image with a scrolling one.
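
The core trick, if you’re curious, looks something like this (a minimal sketch of the general technique, with placeholder colors and sizes rather than my actual styles.css): draw the grid lines with a pair of 1px linear-gradients, then animate the background position to make them scroll.

/* Draw the grid with two 1px linear-gradients, tiled via background-size
   (the color and the 40px cell size here are placeholders) */
.grid-background {
  background-image:
    linear-gradient(to right, rgba(0, 255, 255, 0.4) 1px, transparent 1px),
    linear-gradient(to bottom, rgba(0, 255, 255, 0.4) 1px, transparent 1px);
  background-size: 40px 40px;
}

/* Scroll the grid by animating the background position;
   moving by exactly one cell height makes the loop seamless */
.grid-background.scrolling {
  animation: grid-scroll 2s linear infinite;
}

@keyframes grid-scroll {
  from { background-position: 0 0; }
  to   { background-position: 0 40px; }
}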

Except… I know that not everyone likes moving background effects, so the only responsible way to add an effect like that is with a toggle that allows the site visitor to turn it on and off at will. And the only responsible way to add an interactive toggle like that is via progressive enhancement: visitors without JS enabled (or whose browsers don’t support <script type="module">) get just a static background grid, while those who do have JS get both the scrolling grid AND the toggle, tying the presence of the feature to the ability to disable it.
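
In sketch form, the enhancement script might look something like this (with hypothetical element and class names, not the actual contents of my scripts.js); because it’s loaded as a module, browsers that can’t run it never see the toggle:

// Loaded via <script type="module" src="/scripts.js">; browsers without
// module (or JS) support skip this file entirely and keep the static grid.

// Assumed class name for the element styled with the grid background
const grid = document.querySelector('.grid-background');

// Create the toggle only when the script actually runs, so the control
// never appears without the feature it controls
const toggle = document.createElement('button');
toggle.textContent = 'Toggle background scrolling';
document.body.appendChild(toggle);

// Enhance: start the scrolling effect, and let the visitor switch it off
grid.classList.add('scrolling');
toggle.addEventListener('click', () => {
  grid.classList.toggle('scrolling');
});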

If you want to know more specifics, check out the commit on my self-hosted git server, in particular the changes to the scripts.js and styles.css files.

Because I, too, fight for the users.


Now Playing: "Camp Happy Island Massacre" for DOS

TL;DR — I wrote a simple computer game in 1997 called Camp Happy Island Massacre which I now have running online here.

👓 3 minutes

Way back in 1997, I released my first (and, so far, only) computer game, Camp Happy Island Massacre (hereafter referred to as #CHIM), a comedy-horror text game for the DOS operating system. Originally written while I was still in college, the game is about a cursed summer camp and the 3 surviving counselors who try to stop a horrific force before it claims them. I put it out for free (more or less) on the internet of 1997, and though it was never a huge success, I’ve always been proud of it.

Fast forward to 2018: although I’ve known about the Internet Archive’s MS-DOS Software Library for some time, I’d never really thought about the specifics of how it works until I read an article which talked about the Em-DOSBox project. Em-DOSBox is a port of the DOSBox emulator that runs in the browser, courtesy of Emscripten (a toolchain that compiles C/C++ to JavaScript). As I was reading the article, a thought struck me: could I get CHIM running in the browser?

I decided it was at least worth a shot, so I began with step 1: building Emscripten from source. That went off without an issue, so I moved on to the next step, building the DOSBox JS files, and that’s where I ran into my first snag: the only way I was able to get it to build was by disabling the “emterpreter sync” function (emconfigure ./configure --disable-sync). It complained about the lack of emterpreter sync, but it built, and that led me to the next step: packaging the dosbox.js file for use in a browser via the ./packager.py command. Even though this seemed to work great, there was obviously something wrong with my resulting files, as the JavaScript engine in my browser kept throwing an error (“missing function in Module”). After toying around with it for a while, I found that, if I used ./repackager.py (the Emscripten-less version of the packager) to package my files, I could get an empty DOSBox window to come up, but it still wouldn’t load the actual EXE.

By this point, I was flummoxed, and was about to give up. And that’s when I found the answer: js-dos!

After 30 minutes with this tutorial (and some source viewing on a couple of js-dos game pages), I was able to get CHIM working.

But my work wasn’t finished yet. Even though I’d kept nearly all of the files for CHIM for the last 21 years (with the exception of the game’s original C++ source files, which were lost in a hard drive crash shortly after it was released), I hadn’t really messed with them much in the last decade, so there was some cleaning up to be done. I updated some of the questions (and answers) in the FAQ, replaced the license, and generally tried to clean up the supporting text files. And that’s when I ran into one last unexpected issue: text encoding.

You see, I had forgotten that, when I first wrote the game and the supporting files, I had used some primitive ANSI graphic characters in an attempt to enhance the look of it. And now, when I tried to view those files on my Linux laptop, those graphics came out… weird.

The fix was to convert the files from their original “IBM862” encoding to modern UTF-8:

> iconv -f IBM862 -t UTF8 INTRO.TXT -o INTRO.UTF.TXT

This allowed me to edit the files in Mousepad (and serve them up with Nginx) while still keeping the graphics intact. Finally, I added the Unicode Byte Order Mark (BOM), which makes the files display correctly in the browser, even when served from a file:// URL (in Mousepad, it’s under “Document -> Write Unicode BOM”).
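
If you’d rather not use a GUI editor, the BOM is just the three bytes EF BB BF at the front of the file, so (assuming GNU coreutils) you can also prepend it from the shell:

# Prepend the UTF-8 BOM (bytes EF BB BF) to the converted file
> printf '\xEF\xBB\xBF' | cat - INTRO.UTF.TXT > INTRO.BOM.TXT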

So, if you’d like to try the game out, check it out here, and good luck - you’re gonna need it!


Dat's Incredible!

TL;DR — I decided to try out the Dat protocol, and now have a copy of this site running on it.

👓 3 minutes

Recently, I was inspired by Aral Balkan’s concept of Web+ and his work with dats to add Dat protocol support to my own site(s). Since my experiences might be useful to others, I thought I’d share them here.

A #dat, or Dat #archive, is (as I understand it) a sort of cross between a Git repository and a BitTorrent file share. It was initially developed for storing and sharing data in a decentralized way, and so makes for a great way to share an archive of static content (like a website).

To create and share a Dat archive, I needed to install the dat command-line client. Since it’s built on NodeJS, it gets installed via the Node Package Manager:

sudo npm install -g dat

I already had a directory in mind to share (the one that holds the static files of my website), so sharing the dat was as simple as going into that directory and typing the following commands:

# Enter the directory with the static files
> cd www

# Create the dat
> dat create

Welcome to dat program!
You can turn any folder on your computer into a Dat.
A Dat is a folder with some magic.

Your dat is ready!
We will walk you through creating a 'dat.json' file.
(You can skip dat.json and get started now.)

Learn more about dat.json: https://github.com/datprotocol/dat.json

Ctrl+C to exit at any time
Title: It's Eric Woodward (dotcom)
Description: Dat for the site running at https://www.itsericwoodward.com/.
Created empty Dat in /home/eric/www/.dat

Now you can add files and share:
* Run dat share to create metadata and sync.
* Copy the unique dat link and securely share it.

dat://3dccd6e62ea8e2864fb66598ee38a6b4f4471137eebc23ddff8d81fc0df8dbbc

# Share the newly-created dat
> dat share

And that’s it. 😃

To verify that I was sharing it, I pointed my web browser to https://datbase.org/, entered my DAT URL in the search box at the top-left of the screen, pressed ENTER, and, lo and behold, my website came up.

Another way to verify that a DAT URL is being actively shared is to view it through the Beaker Browser, a special web-browser-like application used for viewing dats. To get it, I went to https://beakerbrowser.com/install/ and downloaded the AppImage file for Linux (no link provided here because the Beaker Browser is still in pre-release, and any image that I point to from here will probably be old by the time anyone reads this).

Then, I launched it:

# Make it executable
> chmod a+x beaker-browser-0.8.0-prerelease.7-x86_64.AppImage

# Launch the AppImage
> ./beaker-browser-0.8.0-prerelease.7-x86_64.AppImage

After a few moments, the application came up, at which point I entered my dat URL into the Beaker Browser’s address bar, hit the ENTER key, and just like that, my website came up.

Unfortunately, unless your data is wildly popular (and thus located across multiple hosts in the swarm), it is only shared for as long as the dat share command is running on your machine. So, to make the files permanently available via Dat, I had to create a service file that would run dat share automatically. To do this, I created a file in /etc/systemd/system called dat-share-www.service, which looked like this:

[Unit]
Description=DAT Share for It's Eric Woodward (dotcom)
After=network.target

[Service]
User=eric
Type=simple
WorkingDirectory=/home/eric/www
Environment=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
ExecStart=/usr/bin/node /usr/local/lib/node_modules/dat/bin/cli.js share
Restart=always

# Output to syslog
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=dat-share-www

[Install]
WantedBy=multi-user.target

After I turned off my existing DAT share, I started the new service, like this:

# Start the service
> sudo systemctl start dat-share-www

# Enable the service (so it starts at boot)
> sudo systemctl enable dat-share-www
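
To confirm that the share is actually up, systemd can tell you (the -t flag matches the SyslogIdentifier set in the unit file above):

# Check the service status
> sudo systemctl status dat-share-www

# Follow its log output
> journalctl -t dat-share-www -f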

Now, my Dat archive will be served 24x7, even after I restart the server.

Everything worked great, but there was one more thing that I wanted to try: I had noticed that, in addition to Aral’s site being available at a usual DAT address (dat:// plus a 64-character code), it was also available at his own domain name (dat://ar.al/). After a quick search, I found that what I was looking for was DAT over DNS, which can be implemented in one of two ways: either via DNS TXT records or by placing a file with a link to the DAT at a specific “well known” location. Since the second option is the one that the DAT project seems to recommend (and it was dead simple to add), that’s what I did. So now, if you launch the Beaker Browser and open the site dat://www.itsericwoodward.com/, it will take you to the DAT version of my site. Neat!
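
For reference, the “well known” option is just a plain-text file served over HTTPS at /.well-known/dat, containing the DAT URL to resolve to and (if I’m reading the docs right) an optional cache TTL. Mine is served at https://www.itsericwoodward.com/.well-known/dat and looks something like this:

dat://3dccd6e62ea8e2864fb66598ee38a6b4f4471137eebc23ddff8d81fc0df8dbbc
TTL=3600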

The DAT protocol is a simple but powerful way to share static content, and can help add a layer of redundancy to a website. Hopefully, your experiences in using it will be as positive as mine.


Loose Ideas for the Next Social Web

TL;DR — Some thoughts about what I would like to see next in the social media / web space.

👓 2 minutes

Inspired by both this toot and my recent dabblings in the Fediverse, I just wanted to take a moment and collect some thoughts about what I would like to see next in the #SocialMedia / #web space.

  • I like the idea of using a hub-and-spoke model, where each actual edge device (phone / tablet / etc.) connects to some kind of always-on server (either a cheap virtual machine or a home-based server), which would be run by a tech-enabling friend, like BBSes used to be.
  • All content creation and such would occur on the edge device, probably via a progressive web app hosted on the hub (to enable #offline creating), and which would connect to its hub when convenient to upload any newly created content.
    • Here, “content” means basically anything that you can create on a social media site - text, photos, replies, whatnot.
  • The content would be marked up with IndieWeb microformats-2 tags, enabling easy consumption / sharing.
  • Since the content creation / editing would occur on the spoke devices, the hub would be used primarily for caching and speedy connectivity (to prevent issues with asymmetric connection speeds that would prevent direct sharing between the edge devices).
  • The hub would collect incoming messages for the user and cache them until the user’s device can connect to the hub to pull them down into their edge device(s).
  • The hub would also support webmentions (both in and out), webfinger, and any other useful protocols (ActivityPub, to enable more clients?).
  • Ideally, each user of this kind of system would have a subdomain of their own (https://eric.example.com), which has their public info, profile pic, and public posts, and which could serve as a Web sign-in endpoint via the presence of an h-card (listing their OAuth2-compatible accounts); a rough sketch of such an h-card follows this list.
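
Here’s that h-card sketch (hypothetical markup with placeholder URLs; the rel="me" links are what RelMeAuth-style web sign-in typically keys off of):

<!-- Minimal microformats2 h-card (placeholder values) -->
<div class="h-card">
  <img class="u-photo" src="/images/eric.jpg" alt="" />
  <a class="p-name u-url" href="https://eric.example.com/">Eric</a>
  <!-- rel="me" links tie this profile to the user's other accounts -->
  <a rel="me" href="https://github.com/example-eric">GitHub</a>
  <a rel="me" href="https://social.example.com/@eric">Fediverse</a>
</div>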

I freely admit that this idea still has some issues, since it is both incredibly hand-wavy and would still require tech-smart gatekeepers to run the hubs, but eventually even that second issue could be mitigated somewhat by turning the software into a single-click install option for a Pi or similar device (or pre-installed on such a device, with a plug-and-play setup of some kind, or pre-built images for VPS hosting).

I’m open to thoughts / suggestions / comments.


The Web is Dead! Long Live the Web!

👓 3 minutes

In browsing through some of the fallout from the arrival of Facebook’s Instant Articles, I stumbled across a couple of great pieces by Baldur Bjarnason (@fakebaldur) that go a long way to explain how we got into the situation we’re in, and why it’s us web developers who are responsible.

In the first, he takes on the ongoing debate about apps vs. the web, and asserts that it isn’t “the web” that’s broken, but how (we) web developers are using it (emphasis his):

Here’s an absolute fact that all of these reporters, columnists, and media pundits need to get into their heads:

The web doesn’t suck. Your websites suck.

All of your websites suck.

You destroy basic usability by hijacking the scrollbar. You take native functionality (scrolling, selection, links, loading) that is fast and efficient and you rewrite it with ‘cutting edge’ javascript toolkits and frameworks so that it is slow and buggy and broken. You balloon your websites with megabytes of cruft. You ignore best practices. You take something that works and is complementary to your business and turn it into a liability.

The lousy performance of your websites becomes a defensive moat around Facebook.

In other words, if the mobile web is dead, it’s because we developers killed it.

On a side note, I wonder if this isn’t a lot of the reason that millennials have increasingly preferred using apps to browsers - because mobile browsing is, for many, a needlessly painful experience.

In the second piece, he even goes so far as to explain why people can’t seem to get on the same page about how “the web” should be: Because they’re all talking about different versions of it:

Instead of viewing the web as a single platform, it’s more productive to consider it to be a group of competing platforms with competing needs. The mix is becoming messy.

  1. Services (e.g. forms and ecommerce, requires accessibility, reach, and security)
  2. Web Publishing (requires typography, responsive design, and reach)
  3. Media (requires rich design, involved interactivity, and DRM)
  4. Apps (requires modularity in design, code, and data as well as heavy OS integration)

Just to drive this point home, he makes reference to the Apple Pointer issue from earlier this year:

This is just one facet of the core problem with the web as an application platform: we will never have a unified web app platform.

What Apple, Google, Microsoft, and Mozilla want from web applications is simply too divergent for them to settle on one unified platform. That’s the reason why we’re always going to get Google apps that only work in Chrome, Apple Touch APIs that are modelled on iOS’s native touch model, and Microsoft Pointer APIs that reflect their need to support both touch and mouse events on a single device at the same time. There really isn’t an easy way to solve this because standardisation hinges on a common set of needs and use cases which these organisations just don’t share.

A more conspiracy-minded individual might even believe that most of the major vendors would be better off if the standards never really do work out, since it would prevent “native-esque” web apps from cutting into their bottom lines in their respective app stores. But I digress.

Speaking for myself, I know that I had never really considered this point when talking / ranting about “the web”. What’s more, I wonder if half of our inability to come to agreement on some of these issues is simply a matter of terminology getting in the way of having meaningful conversations. I mean, apps aren’t “better” than “the web”, because they are essentially part of (one form of) it: they use the same web technologies (HTTP / HTML) as the rest of the “browsable” web, they just use them on the back-end before glossing them over with a pretty “native” front end.

In fact, one might argue that this is why the one area of web standards that has actually seen some progress in the past few months is the HTTP/2 spec - an update to how data is transmitted on the wire, which should bring notable speed and security improvements to anyone who uses HTTP (including all of those native apps I mentioned earlier). After all, improving this part of “the web” is the one thing that all of the players involved can agree on.