Archive for the 'General' Category

Initial thoughts on Cloudflare Pages

I’m investigating Cloudflare Pages as a hosting platform for static sites and React SPAs (amongst other things).

My first impression: This is a really simple tool which handles almost everything about hosting for me. I’m practically sold already. Hook it up to your Github (or Gitlab) repository and for the most part, everything else happens automagically. That simplicity does come with limitations though, so it might not suit everybody.

Custom domains

Hooking up a custom domain to your CF Pages site is really simple. It’s completely automated if Cloudflare manages your DNS. If not, it’s just a case of adding a CNAME entry wherever your DNS is managed. So if I wanted to point sub.example.com to my CF Pages site at example.pages.dev, I’d deploy this CNAME record:

NAME             TYPE    VALUE
sub.example.com  CNAME   example.pages.dev

I was also able to point a custom domain at the latest deployment from a specific Git branch, in this case the develop branch:

NAME                     TYPE    VALUE
sub-develop.example.com  CNAME   develop.example.pages.dev

Environments and Previews

CF Pages can be a bit limiting if you have multiple deployment environments (e.g. development, staging, production). You can only have two sets of environment variables – one for Production and another for everything else. If you need more than that, you might want to look elsewhere for now. Luckily, it’s enough for me (for now).

Speaking of environment variables, at present you have to set them using their web interface, instead of a config file. I don’t have many, so it’s not an issue for me – but if your app makes heavy use of them, they might become a bit cumbersome to manage.

With all that said, CF Pages generates a preview build for every commit you push to Github. This is useful for getting someone else to test your work before merging it, and may reduce the need for different environments. Even if you don’t use CF Pages as your main hosting platform, the preview builds are a useful way to test your site before you go to production.

Workers and server-side code

I haven’t gotten into it yet, but CF Pages now integrates with CF Workers, which is their take on Cloud Functions / Lambda / Serverless. You also get access to KV, their key-value data store. This means Cloudflare Pages isn’t just for static sites – there’s potentially a lot more flexibility available.
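
For my own future reference, here’s a rough sketch of what a Pages Function reading from KV might look like. The functions/ directory and onRequestGet export are Cloudflare’s conventions, but the binding name (PRODUCTS_KV), the route and the key are hypothetical:

// functions/api/products.js – Cloudflare runs this for requests to /api/products
// PRODUCTS_KV is a hypothetical KV namespace bound to the Pages project
export async function onRequestGet(context) {
  // Read a JSON value from KV (returns null if the key doesn't exist)
  const products = await context.env.PRODUCTS_KV.get("product-list", "json");

  return new Response(JSON.stringify(products ?? []), {
    headers: { "Content-Type": "application/json" },
  });
}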

Other limitations and known issues

Handily, Cloudflare have documented some of the known issues and limitations of CF Pages.

Named vs default exports in React projects

When working in React SPAs, I tend to use named exports for the most part, e.g.

export const ProductList = () => {
  return <>A product list component</>;
}

And then, in another file…

import { ProductList } from "components/ProductList";

Part of this is personal preference. It’s also down to my tooling: Flow’s auto-import feature seems to work best with named exports.

I usually only use default exports when creating screens or pages. This comes down to code splitting: I often lazy-load a screen when the user first navigates to it, and React.lazy presently only works with default exports. e.g.

const Products = () => (
  <>
    <h1>Products</h1>
    <ProductList />
  </>
);

export default Products;

And then in another file…

const Products = React.lazy(() => import("./screens/Products"));
const Dashboard = React.lazy(() => import("./screens/Dashboard"));

const Home = () => {
  return (
    // Lazy-loaded components need to render inside a Suspense boundary
    <React.Suspense fallback={<p>Loading…</p>}>
      <Router>
        <Route path="/products" component={Products} />
        <Route default component={Dashboard} />
      </Router>
    </React.Suspense>
  );
};

Redirector

As part of my work, I sometimes need to redirect links to a local installation of a web app, so I can debug a particular issue.

For example, I might receive an email with a link to https://test.example.com/what/ever?thing=stuff but I want to see what happens with the code running at http://localhost:3000/what/ever?thing=stuff. For a while I’ve been copying the link, pasting it, manually editing it, and carrying on. But I thought there had to be a better way.

It turns out Einar Egilsson’s Redirector extension is a better way. Install it in your browser (Firefox is my daily driver) and add a redirect like so:

Include Pattern: https://test.example.com/*
Redirect to: http://localhost:3000/$1

So when the redirect is enabled, any links to https://test.example.com will be redirected to localhost:3000. Thank you, Einar!

Google Fonts vs GDPR

The Bavarian state court in Munich, Germany, on 20 January 2022, decided that using Google fonts in your site breaches the GDPR.

Ton Zijlstra (via Adactio)

Paged.JS and CSS Flex / Grid

Paged.JS is a marvel. It’s a Javascript library which helps paginate HTML content, making it suitable for print output. It implements a lot of the CSS Paged Media specifications – so I guess you can think of it as a polyfill. I’ve combined it with Puppeteer to transform a variety of HTML documents into PDFs, complete with repeating headers, footers, footnotes, etc.
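
The Puppeteer side of that is fairly small. As a rough sketch (the URL and output path below are just placeholders), it boils down to loading a page that includes Paged.JS and asking Chromium to print it:

const puppeteer = require("puppeteer");

// Render an HTML page (which pulls in Paged.JS) and print it to a PDF file
const htmlToPdf = async (url, outputPath) => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  // Wait for network activity to settle so Paged.JS has a chance to run
  await page.goto(url, { waitUntil: "networkidle0" });

  // Respect the @page size declared in the CSS
  await page.pdf({ path: outputPath, preferCSSPageSize: true });

  await browser.close();
};

htmlToPdf("http://localhost:8080/report.html", "report.pdf");

In practice you’d probably also want to wait for Paged.JS to signal that it has finished paginating before calling page.pdf, but that’s the general shape of it.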

It’s amazing, but imperfect. I’ve run into a few issues in the current version (v0.2.0 at the time of writing). My HTML documents make extensive use of CSS flex and grid for layout. For the most part this works fine with Paged.JS, except at the borders between pages. For example, I can use break-inside: avoid to avoid page breaks from happening inside an element:

@media print {
  .block {
    break-inside: avoid;
  }
}

However, if that element happens to be either a flex parent (it has display: flex;) or a flex item (it’s a direct child of a flex parent), the break-inside property is either ignored, or causes other “interesting” issues. Improvements are on the roadmap, but in the meantime we need a workaround. It’s not ideal, but the most reliable way I’ve found is to fall back to older layout techniques, like block, inline-block or float:

@media print {
  .block {
    display: block;
    break-inside: avoid;
  }
}

They’re obviously not quite as flexible, but they work!

It’d be really nice if the various browser engines out there implemented more of the Paged Media specifications natively. As much as I love Paged.JS, I’d love to be able to remove it. Maybe one day…

When Flow typing didn’t quite work properly (and it was all my own fault)

I’m working on a Javascript and React application, which uses Flow to enforce types. For the most part, I love working with Flow, but it hadn’t been working properly for a while.

My IDEs (both Webstorm and VSCode) reported type errors correctly, but running flow check at the command line (or on CircleCI) always returned No errors! Great, except there were errors lurking in there – loads more than Webstorm was finding!

It turned out to be because I had this line in my .flowconfig:

[options]
module.system.node.resolve_dirname=src

That was there so I could use absolute imports (meaning I could type import Foo from "utils/bar"; instead of import Foo from "../../../../../../../../utils/bar";). Unfortunately that turned out to be a broken config, which masked a hell of a lot of problems! Luckily I wasn’t the first person to run into this. The correct way to do that is:

[options]
module.name_mapper='^utils/\(.*\)$' -> '<PROJECT_ROOT>/src/utils/\1'

And add another module.name_mapper line for each top-level folder under src.
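
So for a hypothetical project with components, screens and utils folders under src, the [options] block ends up looking something like this:

[options]
module.name_mapper='^components/\(.*\)$' -> '<PROJECT_ROOT>/src/components/\1'
module.name_mapper='^screens/\(.*\)$' -> '<PROJECT_ROOT>/src/screens/\1'
module.name_mapper='^utils/\(.*\)$' -> '<PROJECT_ROOT>/src/utils/\1'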

If you’re importing from a file at the top level (e.g. import type { Foo } from "flow-types";) the module.name_mapper will look something like this:

module.name_mapper='^flow-types' -> '<PROJECT_ROOT>/src/flow-types.js'

Unfortunately, when I corrected the problem, it revealed a lot of Flow errors in my codebase. Flow has got a lot more strict since I introduced the problem (and that’s a good thing). Luckily, most of those issues are relatively simple to fix!

What to do when the HTML download attribute is ignored

It turns out web browsers will usually ignore the <a download="filename"> HTML attribute on cross-origin requests.
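
In other words, for a cross-origin link like this one (the bucket URL is just an example), most browsers will quietly ignore the attribute and its suggested filename:

<!-- Page served from https://example.com, file served from a different origin -->
<a
  href="https://storage.googleapis.com/my-bucket/export_yVW4Bg-f63rpZIUiXvWct.pdf"
  download="export_31Jan2022.pdf"
>
  Download report
</a>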

The answer is for the server to set the HTTP Content-Disposition header in the response:

Content-Disposition: attachment

This assumes the filename on the server is correct. In my case (for complicated and boring reasons), it is not, so I also need to set a filename in the Content-Disposition HTTP header, e.g. Content-Disposition: attachment; filename="example.pdf".

In my case, the files are stored on a Google Storage bucket. Their name is not the same as the name the user wants (e.g. the file is called export_yVW4Bg-f63rpZIUiXvWct.pdf but the user wants export_31Jan2022.pdf). So when I create the file, I also need to set the Content-Disposition header accordingly.

This code snippet is part of a Google Cloud Function running on NodeJS, and I’m using the @google-cloud/storage library:

const { Storage } = require("@google-cloud/storage");

const storage = new Storage();

// Set the metadata to make the PDF download (and name itself) correctly
const setFileMetadata = async (bucketName, fileName, downloadFileName) =>
  await storage
    .bucket(bucketName)
    .file(fileName)
    .setMetadata({
      contentDisposition: `attachment; filename="${downloadFileName}"`,
      contentType: "application/pdf", // the files in question are PDFs
    });

await setFileMetadata(
  "my-unique-bucket-name",
  "export_yVW4Bg-f63rpZIUiXvWct.pdf",
  "export_31Jan2022.pdf"
);

Google’s engineers have posted a more comprehensive code sample, showing how you can also set other headers (e.g. cache-control) and metadata on the file.
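
As a rough sketch of the same idea (the values here are purely illustrative), setMetadata also accepts things like cacheControl and arbitrary custom metadata alongside contentDisposition:

await storage
  .bucket("my-unique-bucket-name")
  .file("export_yVW4Bg-f63rpZIUiXvWct.pdf")
  .setMetadata({
    contentDisposition: 'attachment; filename="export_31Jan2022.pdf"',
    cacheControl: "private, max-age=0",           // served back as the Cache-Control header
    metadata: { generatedBy: "export-function" }, // hypothetical custom key/value metadata
  });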

The 1996 GT LTS-3

https://youtu.be/em1MQRiKdrQ

I had one of those, in black! It was rad. Also terrible and squeaky. But mostly rad.