<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/">
    <channel>
        <title>zachsteiner.com - Posts</title>
        <link>https://zachsteiner.com</link>
        <description>Blog posts from Zach Steiner on UX, front-end engineering, workplace psychology, music, and sometimes cooking.</description>
        <lastBuildDate>Sun, 01 Mar 2026 16:10:57 GMT</lastBuildDate>
        <docs>https://validator.w3.org/feed/docs/rss2.html</docs>
        <generator>https://github.com/jpmonette/feed</generator>
        <language>en</language>
        <copyright>All rights reserved 2026, Zach Steiner</copyright>
        <item>
            <title><![CDATA[Vertical Layout Made Easy]]></title>
            <link>https://zachsteiner.com/posts/2025-01-18-layout-components</link>
            <guid isPermaLink="false">https://zachsteiner.com/posts/2025-01-18-layout-components</guid>
            <pubDate>Sat, 18 Jan 2025 00:00:00 GMT</pubDate>
            <description><![CDATA[When I used Tailwind on a project, its vertical spacing utility classes (`space-y-4` or `divide-y`) made ensuring vertical rhythm and dividers pretty easy]]></description>
            <content:encoded><![CDATA[<p>When I used Tailwind on a project, its vertical spacing utility classes (<code>space-y-4</code> or <code>divide-y</code>) made ensuring vertical rhythm and dividers pretty easy. They had one big problem: you needed to add top padding to the children in the layout. Not hard in itself, but it came with a big gotcha: conditional rendering. If you have a column of conditionally rendered sections to space or divide, you have to add business logic to handle the top padding whenever a section suddenly becomes the first section in the group.</p>
<p>When I moved to a project that didn't have Tailwind, I searched for an easier approach.</p>
<p>I experimented with flex and grid layouts, but landed on the following snippet. This is simplified and converted to pure CSS; the production code uses <a href="https://styled-components.com/">styled-components</a> to fit the app's styling conventions. The principles are the same, though:</p>
<pre><code class="language-css">.vertical-space > :not(:last-child) {
  /* Sets the gap for the individual child */
  --gap: var(--component-gap);

  margin-block-end: var(--gap);
}
</code></pre>
<p>To start with basic vertical spacing, I simply add a <code>margin-block-end</code> to each direct child, unless it's last. Pretty easy! But what about that custom property? It lets these layouts nest with different gaps between children. I found pretty quickly that if I nested a <code>--component-gap: 0.5rem</code> inside a <code>--component-gap: 1rem</code>, it wouldn't work correctly unless the custom property was scoped to the children. Scoping lets you nest larger or smaller gaps without collisions, which enables layouts like a form whose input groups all leverage this same layout.</p>
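<p>For instance, a tighter gap can be scoped directly to a container's children via a variant class (the class name and value here are hypothetical; the production values come from tokens). Any nested layout is untouched, because each container re-assigns <code>--gap</code> for its own children:</p>

```css
/* Hypothetical compact variant: tightens the gap for this
   container's direct children only */
.vertical-space--compact > :not(:last-child) {
  --gap: 0.5rem;
}
```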
<p>Why didn't I just use flex or grid and leverage the <code>gap</code> property? That would be a lot easier, for sure, but there are cases where you'd need to add extra styling to children to make sure each child maintained full width. The goal here is to let the parent container do the lifting without the children needing to know what's going on with their container. The custom property can be updated via the React component for each usage if it needs to differ from the default.</p>
<h2>Vertical Space with Divider</h2>
<p>And now for adding a divider between children. Tailwind does this pretty well (though it requires a few too many classes without a wrapper component), but this is where it requires adding top padding, which can get tricky when something like a card/tile is a child.</p>
<pre><code class="language-css">/* Composes the border from its pieces. Declared alongside the
   pieces (rather than on :root) so the var() references resolve
   on an element where those pieces are actually defined. */
.vertical-space-divider {
  --border-color: var(--component-border-color);
  --border-style: var(--component-border-style);
  --border-width: var(--component-border-width);
  --border: var(--border-width) var(--border-style) var(--border-color);
}

.vertical-space-divider > :not(:last-child) {
  /* Sets the gap for the individual child */
  --gap: var(--component-gap);

  margin-block-end: calc(var(--gap) * 2);
  position: relative;
}

.vertical-space-divider > :not(:last-child)::after {
  border-block-end: var(--border);
  content: '';
  display: block;
  inset-block-end: calc(
    var(--gap) * -1 - var(--border-width)
  );
  inset-inline: 0;
  position: absolute;
  width: 100%;
  z-index: 1;
}
</code></pre>
<p>It's a similar idea to the above, but adds an <code>::after</code> pseudo-element to position the divider. No <code>hr</code> or conditional top border needed. The margin here is double the desired gap to obviate top padding. The inset for the divider uses some <code>calc()</code> to negatively offset it based on the gap and border width, so it sits centered between the children. The custom properties allow component-level overrides of each piece of the border, so a given usage can change the color, style, or width as needed. In production, these are locked down via tokens, so the API is simpler and results in more consistent styling than the chaos of arbitrary values.</p>
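<p>As a sketch of such an override (the class name and values are hypothetical; in production they would come from tokens), a usage re-declares the component-level properties on the element that carries the divider class:</p>

```css
/* Hypothetical override: a lighter, thicker, dashed divider */
.settings-list {
  --component-border-color: #e2e2e2;
  --component-border-style: dashed;
  --component-border-width: 2px;
}
```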
<p>This also solidifies the choice not to use <code>grid</code> or <code>flex</code>: as of early 2025, we cannot style grid lines. Once that gets added to the spec and becomes well supported, it may well be a simpler alternative.</p>
<p><strong>Caveat</strong>: the one downside I've found is that this approach falls down with children that have <code>overflow: hidden</code>. I discovered this while laying out a column of accordion components. The workarounds are not hard, but a bit annoying: either wrap the offending component in a <code>div</code>, or design components so that <code>overflow: hidden</code> is never on the outermost element. That seems a reasonable trade-off for the ease of use in the vast majority of cases.</p>
<h2>In Use</h2>
<p>The above is pure CSS that can be used as utility classes, but in our app this is all wrapped in <code>VerticalSpace</code> and <code>VerticalSpaceDivider</code> components. Here's what it might look like in use:</p>
<pre><code class="language-jsx">&#x3C;VerticalSpaceDivider gap="space-lg">
  {isAdmin ?
    &#x3C;VerticalSpaceDivider gap="space-sm">
      &#x3C;Input>Input 3 (admin)&#x3C;/Input>
      &#x3C;Input>Input 4 (admin)&#x3C;/Input>
    &#x3C;/VerticalSpaceDivider>
  : null }
  &#x3C;VerticalSpaceDivider gap="space-sm">
    &#x3C;Input>Input 1&#x3C;/Input>
    &#x3C;Input>Input 2&#x3C;/Input>
  &#x3C;/VerticalSpaceDivider>
  &#x3C;Button>Save&#x3C;/Button>
&#x3C;/VerticalSpaceDivider>
</code></pre>
<p>We have two input groups separated by a large gap (e.g., <code>2rem</code>), two inputs within each group separated by a small gap (e.g., <code>0.5rem</code>), and a save button. If the user is an admin, they see both input groups with a divider; non-admin users see just a single input group. There is 32px of space between the two input groups and 8px between the inputs in each group. The save button sits 32px from the last input group.</p>
<p><img src="/images/posts/2025/01/default-layout.jpg" alt="Default layout example">
<em>Default Layout</em></p>
<p><img src="/images/posts/2025/01/admin-layout.jpg" alt="Admin layout example">
<em>Admin Layout</em></p>
<p>The semantic design tokens are accessed via React component props (e.g., <code>gap</code>, <code>borderColor</code>, etc.) that are typed for consistency and to remove the tyranny of choice and its attendant inconsistency. This was another downside of the Tailwind approach: it was hard to remember whether the border color was <code>grey-100</code> or <code>grey-200</code>, for instance, or whether the spacing was <code>v-space-4</code> or <code>v-space-8</code>. The simpler semantic tokens for color (light, base, dark) and t-shirt sizes for space (sm, base, md, lg, etc.) help the team more easily translate designs into code with less drift from page to page.</p>]]></content:encoded>
            <author>Zach</author>
            <category>CSS</category>
            <category>Design Systems</category>
            <category>JavaScript</category>
            <category>React</category>
        </item>
        <item>
            <title><![CDATA[How I Write Components in 2023]]></title>
            <link>https://zachsteiner.com/posts/2023-12-31-components</link>
            <guid isPermaLink="false">https://zachsteiner.com/posts/2023-12-31-components</guid>
            <pubDate>Sun, 31 Dec 2023 00:00:00 GMT</pubDate>
            <description><![CDATA[**tl;dr**: Composition-based components make writing library components more flexible and easier to use. Avoid the dreaded propspocolypse.

I've been writing c]]></description>
            <content:encoded><![CDATA[<p><strong>tl;dr</strong>: Composition-based components make writing library components more flexible and easier to use. Avoid the dreaded propspocolypse.</p>
<p>I've been writing components for at least a decade now, maybe a bit longer if you count weird templating hacks in Microsoft FrontPage circa 1999. My techniques and technologies have evolved over the years. My first pass at a component library was global CSS (Sass-powered) with HTML patterns. This eventually gave way to more complex Storybook-based component libraries. Most recently I've been working on <a href="https://design.clearbit.com">ClearKit</a> at Clearbit. My approach to component architecture and API has evolved considerably since I took over the reins. To start, here's how I might have approached a component API when I was ramping up:</p>
<pre><code class="language-tsx">import { CKCollapsibleCard } from 'clearkit';

return (&#x3C;CKCollapsibleCard
  className="...classes"
  footer={&#x3C;Footer />}
  header={&#x3C;Header />}
>
  &#x3C;p>Card body content&#x3C;/p>
&#x3C;/CKCollapsibleCard>);
</code></pre>
<p>This API is pretty standard for React components. The type would look something like this:</p>
<pre><code class="language-tsx">type CKCollapsibleCardProps = {
  children: React.ReactNode;
  className?: string;
  footer?: React.ReactNode;
  header?: React.ReactNode;
};
</code></pre>
<p>Then this API fell right on its face. When going through the variants we needed, sometimes the footer for the card needed to be always visible and sometimes it needed to collapse with the body, leading to the prop <code>isBodyFooter</code>. Then the cracks began to show. Next I realized that sometimes the body and footer needed different styling (e.g., a background color or different padding). Then there were cases where the card needed to be open based on the presence of data. Now a straightforward set of 4 props doubles to at least 8. The propspocolypse begins.</p>
<pre><code class="language-tsx">type CKCollapsibleCardProps = {
  children: React.ReactNode;
  className?: string;
  footer?: React.ReactNode;
  footerClassName?: string;
  header?: React.ReactNode;
  headerClassName?: string;
  isBodyFooter?: boolean;
  isDefaultOpen?: boolean;
};
</code></pre>
<p>How to streamline this? Composition! Let's work backwards from the end state.</p>
<pre><code class="language-tsx">const [isOpen, setIsOpen] = useState(false);

return (
&#x3C;CKCollapsibleCard
  className="...classes"
  isOpen={isOpen}
  onToggle={setIsOpen}
>
  &#x3C;CKCollapsibleCard.Header className="...classes">
    &#x3C;CKCollapsibleCard.Title>
      Title
    &#x3C;/CKCollapsibleCard.Title>
    &#x3C;h3>Subtitle&#x3C;/h3>
  &#x3C;/CKCollapsibleCard.Header>
  &#x3C;CKCollapsibleCard.Body className="...classes">
    &#x3C;p>Card body content&#x3C;/p>
  &#x3C;/CKCollapsibleCard.Body>
  &#x3C;CKCollapsibleCard.Footer className="...classes">
    Footer content
  &#x3C;/CKCollapsibleCard.Footer>
&#x3C;/CKCollapsibleCard>);
</code></pre>
<p>This looks a bit more verbose than a list of props, but it has two big advantages:</p>
<ol>
<li>It's much more flexible. Each sub-component has a full props API without the parent component needing more and more props. Sub-components mostly pass children.</li>
<li>It's more readable. It looks like ... gasp ... HTML!</li>
</ol>
<p>First let's talk about extensibility. Remember the case of the footer needing to sometimes be always visible and sometimes collapse with the body? Easy: just move the footer sub-component into the body sub-component.</p>
<pre><code class="language-tsx">const [isOpen, setIsOpen] = useState(false);

return (
&#x3C;CKCollapsibleCard
  isOpen={isOpen}
  onToggle={setIsOpen}
>
  &#x3C;CKCollapsibleCard.Header>
    &#x3C;CKCollapsibleCard.Title>
      Title
    &#x3C;/CKCollapsibleCard.Title>
    &#x3C;h3>Subtitle&#x3C;/h3>
  &#x3C;/CKCollapsibleCard.Header>
  &#x3C;CKCollapsibleCard.Body>
    &#x3C;p>Card body content&#x3C;/p>
    &#x3C;CKCollapsibleCard.Footer>
      Footer content
    &#x3C;/CKCollapsibleCard.Footer>
  &#x3C;/CKCollapsibleCard.Body>
&#x3C;/CKCollapsibleCard>);
</code></pre>
<p>Now the footer collapses with the body. The footer can have its background color set via <code>className</code>. What about an <code>aria-label</code> on the header? Just add it to the header's props. Each of these sub-components takes consistent children plus whatever other props you need. In fact, we can extend the attributes of <code>footer</code>, <code>header</code>, or any other HTML element using TypeScript. This way our containers always have access to the 100-plus native attributes without the component author playing whack-a-mole adding props for specific needs (I've been there!).</p>
<p>For the children, you can pass JSX directly or create small components that encapsulate their content. Whatever we need in the consuming app, we can do it.</p>
<p>What about the default open behavior? Notice that in the example, the component is now controlled (i.e., <code>isOpen</code> and <code>onToggle</code>) from the parent. This adds further extensibility. By default this component handles its toggling internally, but when we need it to start open or need the parent to be aware of the card's state, we can pass in the props to control it. The batteries-included version of the component requires few props, but we can let the parent take over when needed. Why not just <code>isDefaultOpen</code>? It could be useful, but we found that every time we needed it, we also needed the parent to know about the toggling. For instance, the app needs to execute other logic when the card toggles, like validation, analytics, or closing other cards. Being able to switch to controlled makes this possible without sacrificing the ability to drop a component into a page and have it work.</p>
<p>Looking at the above code, one might wonder how the body and header know about the <code>isOpen</code> and <code>onToggle</code> props. This is where React Context comes in. The parent component provides the context and the children consume it. This is a great way to avoid prop drilling.</p>
<p>Let's dive into the full implementation to see how it all comes together:</p>
<pre><code class="language-tsx">import classnames from 'classnames';
import React, {
  createContext,
  FC,
  HTMLAttributes,
  ReactNode,
  useContext,
  useEffect,
  useState,
} from 'react';

import {
  excludeChildrenByDisplayName,
  includeChildrenByDisplayName,
} from '../../utils/mapChildren';

// Normally this is imported; it's inlined here to show the type
type CKContainerProps&#x3C;T = HTMLDivElement>
  = HTMLAttributes&#x3C;T> &#x26; {
  children?: ReactNode;
}

export type CKCardCollapsibleProps = CKContainerProps &#x26; {
  /**
   * Set the max-height of card body content so that the body will scroll if content is long.
   * Accepts any valid CSS height unit.
   * @default max-content
   **/
  cardBodyMaxHeight?: string;
  isOpen?: boolean;
  onToggle?: () => void;
}

export interface CKCardCollapsibleComposition {
  Header: FC&#x3C;CKContainerProps>;
  Trigger: FC&#x3C;CKContainerProps>;
  Body: FC&#x3C;CKContainerProps>;
  Footer: FC&#x3C;CKContainerProps>;
}

const cardPadding = 'p-6';
const cardBorder = 'border-gray-100 border-t';

type CardContextValues = {
  handleCardToggle: () => void;
}

const CardContext = createContext&#x3C;
  Omit&#x3C;CKCardCollapsibleProps, 'children' | 'className' | 'onToggle'> &#x26;
    CardContextValues
>({
  cardBodyMaxHeight: 'max-content',
  handleCardToggle: () => {},
  isOpen: false,
});

export const CKCardCollapsible: FC&#x3C;CKCardCollapsibleProps> &#x26;
  CKCardCollapsibleComposition = ({
  cardBodyMaxHeight = 'max-content',
  children,
  className,
  isOpen,
  onToggle,
  ...rest
}) => {
  const isControlled = isOpen !== undefined &#x26;&#x26; !!onToggle;
  const [isOpenInternal, setIsOpenInternal] = useState(isControlled &#x26;&#x26; isOpen);

  useEffect(() => {
    if (isControlled) {
      setIsOpenInternal(!!isOpen);
    }
  }, [isOpen]);

  const handleCardToggle = () => {
    if (!isControlled) {
      setIsOpenInternal(!isOpenInternal);
    }

    onToggle?.();
  };

  return (
    &#x3C;CardContext.Provider
      value={{
        cardBodyMaxHeight,
        isOpen: isOpenInternal,
        handleCardToggle,
      }}
    >
      &#x3C;div
        {...rest}
        className={classnames(className, 'ck-box will-change-transform')}
      >
        {children}
      &#x3C;/div>
    &#x3C;/CardContext.Provider>
  );
};

CKCardCollapsible.displayName = 'CKCardCollapsible';

CKCardCollapsible.Header = ({ children, className, ...rest }) => {
  const { isOpen, handleCardToggle } = useContext(CardContext);

  const headerClasses = classnames(
    cardPadding,
    'ck-card-header',
    'rounded-t-md transition-[border-radius] duration-300 ease-out',
    {
      'rounded-b-md': !isOpen,
    },
    className,
  );

  return (
    &#x3C;header {...rest} className={headerClasses}>
      &#x3C;button
        aria-label="toggle card"
        className="ck-card-header__toggle"
        onClick={handleCardToggle}
      />
      &#x3C;div className="min-w-0">
        {excludeChildrenByDisplayName({
          children,
          componentDisplayName: 'CKCardCollapsible.Trigger',
        })}
      &#x3C;/div>
    &#x3C;/header>
  );
};
CKCardCollapsible.Header.displayName = 'CKCardCollapsible.Header';

CKCardCollapsible.Body = ({ children, className, ...rest }) => {
  const { cardBodyMaxHeight, isOpen } = useContext(CardContext);

  return isOpen ? (
    &#x3C;div {...rest} className={className}>
      &#x3C;div
        className={classnames(cardPadding, {
          'ck-scrollbar ck-scrollbar--vertical':
            cardBodyMaxHeight !== 'max-content',
        })}
        style={{ maxHeight: cardBodyMaxHeight }}
      >
        {excludeChildrenByDisplayName({
          children,
          componentDisplayName: 'CKCardCollapsible.Footer',
        })}
      &#x3C;/div>
      {includeChildrenByDisplayName({
        children,
        componentDisplayName: 'CKCardCollapsible.Footer',
      })}
    &#x3C;/div>
  ) : null;
};
CKCardCollapsible.Body.displayName = 'CKCardCollapsible.Body';

CKCardCollapsible.Footer = ({ children, className, ...rest }) => {
  const footerClasses = classnames(
    cardBorder,
    'rounded-b-md',
    className,
  );

  return &#x3C;footer {...rest} className={footerClasses}>{children}&#x3C;/footer>;
};
CKCardCollapsible.Footer.displayName = 'CKCardCollapsible.Footer';
</code></pre>
<p>You can see how the component is wired up through context and how the sub-components are composed. The <code>CKCardCollapsible</code> component only has a handful of props that it passes down to context and mostly becomes a wrapper for its children. For instance, <code>CKCardCollapsible.Body</code> handles all the heavy lifting of computing heights for open and closed. The trigger component handles its toggling and whether to render a custom trigger or the default.</p>
<p>You will notice the utilities <code>excludeChildrenByDisplayName</code> and <code>includeChildrenByDisplayName</code>. These are custom helpers that make it easier to work with children by filtering them by their <code>displayName</code>. This creates an interface akin to slots in Vue or Svelte, but with a full React props API.</p>
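<p>As an illustration of how such helpers might work, here is a simplified sketch. It is not the actual ClearKit implementation, which operates on real React children; the element shape here is reduced to just what the filtering needs:</p>

```typescript
// Simplified sketch of displayName-based child filtering. Each "element"
// is modeled as an object whose `type` carries the component's displayName,
// which is how dot-notation sub-components can be identified.
type ElementLike = { type?: { displayName?: string } };

type FilterArgs<T extends ElementLike> = {
  children: T[];
  componentDisplayName: string;
};

// Keep only the children rendered by the named component.
export function includeChildrenByDisplayName<T extends ElementLike>({
  children,
  componentDisplayName,
}: FilterArgs<T>): T[] {
  return children.filter(
    (child) => child?.type?.displayName === componentDisplayName,
  );
}

// Keep every child except those rendered by the named component.
export function excludeChildrenByDisplayName<T extends ElementLike>({
  children,
  componentDisplayName,
}: FilterArgs<T>): T[] {
  return children.filter(
    (child) => child?.type?.displayName !== componentDisplayName,
  );
}
```

<p>This is what lets <code>CKCardCollapsible.Body</code> render the footer outside its scrollable region while keeping everything else inside it.</p>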
<p>The internals of the component may seem a bit verbose, but it's mostly boilerplate. The payoff comes when consuming applications get an exceptionally flexible API that is easy to reason about and to implement. When design or product has subtle variations on the card, the library is ready. Need a "New" badge on the header? Just make a flex layout in the header sub-component. How about an icon in the header? Similar process. Buttons in the footer with complex data-driven visibility logic? Just pass a component as a child to the footer sub-component.</p>
<p>This was the component that really became the a-ha moment for me. I realized that components could be more flexible to allow for more use cases, and that the API could be more readable, more like HTML. This is the approach I've taken with all new components in ClearKit, and I've refactored a lot of existing components to use this API. We've even extended this approach to consuming apps to cover page layout components.</p>]]></content:encoded>
            <author>Zach</author>
            <category>JavaScript</category>
            <category>React</category>
            <category>Design Systems</category>
        </item>
        <item>
            <title><![CDATA[Compostable Code]]></title>
            <link>https://zachsteiner.com/posts/2023-10-31-compostable-code</link>
            <guid isPermaLink="false">https://zachsteiner.com/posts/2023-10-31-compostable-code</guid>
            <pubDate>Tue, 31 Oct 2023 00:00:00 GMT</pubDate>
            <description><![CDATA[I take pride in the craft of writing software. I constantly work to improve the quality of my code: to be more readable, more maintainable, etc. I want to writ]]></description>
            <content:encoded><![CDATA[<p>I take pride in the craft of writing software. I constantly work to improve the quality of my code: to be more readable, more maintainable, etc. I want to write code that is easy to understand and easy to update or extend. I also want to write code that is easy to delete. Wait, what? Yes. Software for the web is ephemeral. That is, unless you call something a "temporary hack"; then it will live for a decade. Ordinarily, software serves its purpose and then is rewritten or removed as business needs change, architectures shift, and so on. I used to liken writing software for the web to constructing a sand mandala: Tibetan Buddhist monks painstakingly build an elaborate painting from colored sand, and then, in a moment of detachment, sweep it away. That detachment from the output of my work is liberating. Occasionally this is how it goes, and I dramatically delete a whole feature or section of an application in a -1500 LOC pull request, but this is rarely the life cycle of code. More often a feature sits untouched for years until a bug is found or new functionality needs to be added. So I now reach for a potentially less flattering metaphor: compost.</p>
<p>I've heard the term "bit rot" used to describe code that hasn't been touched in some period of time. Often this means an app's dependencies are so out of date that it is incompatible with the latest version of a library (e.g., a component library or Node). Bit rot can also apply to a piece of functionality that hasn't been touched, or maybe even looked at, in years. Sometimes it's hard to understand what the code does or how it interacts with the wider app, but it's important functionality that keeps chugging along. That is, until we need to update React or the component library, or a subtle long-standing bug is uncovered. Now this 1500-line, hard-to-reason-about component has rotted to the point where it impacts the application. It slows down work at best and impacts users at worst. What if we wrote code that was easy to delete? Code that degrades gracefully rather than rots? This is where the compost metaphor comes in.</p>
<p>Code becomes like organic material that will degrade over time, but whose pieces can be used to nourish the ecosystem of the application, much like your banana peels and coffee grounds become soil for gardens and farms. So what does compostable code look like in practice? Honestly, it's very similar to writing code that is "maintainable", but with a different orientation. These examples are front-end web focused, since front-end code tends to be more ephemeral than other software, but the principles apply anywhere.</p>
<ol>
<li>Small functions that do one thing. Functions built from other smaller functions. These functions can be used in other places in the application, as well.</li>
<li>Presentational components that compose smaller presentational components. Rather than writing all the markup in one component, break it up into smaller components that compose together. It's easier to read and update, components can be swapped in or out as needed, and the smaller components can be reused elsewhere in the application.</li>
<li>Make page layouts composable. Avoid copying layout patterns across components, and avoid a layout component with a bunch of conditional logic to render different layouts. For example, rather than a <code>header</code> and <code>footer</code> prop on a layout, create small components for these that can be used across layouts. If your layout (or really any) component has a bunch of boolean props, it's probably better served by composition.</li>
<li>Composition over data-driven components. Rather than writing an array of 4 objects to map over to spit out some markup, create a presentational component and copy it 4 times. DRY isn't always the ideal.</li>
<li>Avoid coupling. Allow components to work together, but not be dependent on each other. This way components can be swapped out or removed without impacting other components.</li>
<li>Components that connect to the data layer should only connect data and then pass it to presentational components. This way the data layer can be swapped out without impacting the presentation layer, or the other way around. This is a specific example of #5.</li>
<li>Prefer some boilerplate over abstractions. Abstractions are often useful, but they can be hard to reason about. Say you have an application with five API integrations; it may be worse to write an abstraction that only pays off if the integrations expand to 100+. It's okay to repeat yourself sometimes.</li>
<li>Test code at the lowest levels and at integration points. It's important to make sure that the pieces work as intended themselves and then come together in the big picture. The integration tests may get replaced over time, but the lower-level tests will remain useful as long as the components remain in service.</li>
</ol>
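<p>To make the first principle concrete, here's a tiny sketch (the function names and behavior are hypothetical, purely illustrative): small, single-purpose functions compose into a larger one, and each piece can be reused or deleted independently without disturbing the others.</p>

```typescript
// Hypothetical example of principle 1: each function does one thing.
const trim = (s: string): string => s.trim();
const collapseSpaces = (s: string): string => s.replace(/\s+/g, ' ');
const capitalize = (s: string): string =>
  s.charAt(0).toUpperCase() + s.slice(1);

// The larger function is just a pipeline of the smaller pieces.
// Deleting it "composts" nothing: the pieces remain usable elsewhere.
const normalizeTitle = (s: string): string =>
  capitalize(collapseSpaces(trim(s)));
```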
<p>It is a happy coincidence that composable and compostable are one letter apart, and composition really is the biggest tool for getting to this point.</p>
<p>These are just examples to demonstrate the idea. What I'm describing are simply good architectural practices. However, thinking of the code as compostable helps detach you from your previous work. It encourages a growth mindset of constant improvement, but in a healthier way than rewriting code in a fit of pique. Compostable code can be deleted, but it can be repurposed as well. This enables iterative refactoring and improvement without the (always) fraught process of rewriting. It's all about creating a healthy ecosystem in which your application can grow and thrive.</p>]]></content:encoded>
            <author>Zach</author>
            <category>Code</category>
            <category>React</category>
        </item>
        <item>
            <title><![CDATA[Goodbye Gatsby, Hello Next]]></title>
            <link>https://zachsteiner.com/posts/2023-04-05-goodbye-gatsby</link>
            <guid isPermaLink="false">https://zachsteiner.com/posts/2023-04-05-goodbye-gatsby</guid>
            <pubDate>Sun, 23 Apr 2023 00:00:00 GMT</pubDate>
            <description><![CDATA[It seems 4 years is the lifespan of a platform for this site. I liked Gatsby, but to be honest GraphQL was overkill for my needs. I really just need to render]]></description>
            <content:encoded><![CDATA[<p>It seems 4 years is the lifespan of a platform for this site. I liked Gatsby, but to be honest GraphQL was overkill for my needs. I really just need to render content from markdown files. Having become well acquainted with Next.js for work, I decided to take the plunge. In fact, I converted a Gatsby docs site to Next recently. The <a href="https://nextjs.org/docs/migrating/from-gatsby">migration guide</a> was really straightforward for that site. For this one? A bit less so.</p>
<p>All of the components were available, but I had to pull out all of the GraphQL queries for content and reimagine routing using Next's file-based routing, marrying it to content stored in markdown files. It was a nice learning opportunity to better understand how <code>getStaticProps</code> works.</p>
<p>I also took the opportunity to migrate to TypeScript and remove Sass in favor of custom properties. I also finally rolled out logical properties for all my CSS. Not a visual refresh, but a total rearchitecting. Also, I now have the level of linting and auto-fixing that I enjoy at work.</p>
<p>The biggest challenge came at deployment to Netlify. It was a struggle to get <code>next/image</code> and its optimization working. My first deploy had no images, so I had to do some digging to figure out how to get my images to not 404. Ultimately, it was a very silly thing: Gatsby builds to <code>/public</code> and keeps assets in <code>/static</code>, but <code>/public</code> is where Next serves assets from. I had <code>/public</code> in my gitignore. Sigh...</p>]]></content:encoded>
            <author>Zach</author>
            <category>JavaScript</category>
        </item>
        <item>
            <title><![CDATA[Goodbye Jekyll]]></title>
            <link>https://zachsteiner.com/posts/2019-08-21-goodbye-jekyll</link>
            <guid isPermaLink="false">https://zachsteiner.com/posts/2019-08-21-goodbye-jekyll</guid>
            <pubDate>Wed, 21 Aug 2019 00:00:00 GMT</pubDate>
            <description><![CDATA[Four years I've stuck by Jekyll. It's gotten more powerful, but it's also gotten much slower. I was experiencing 90 second build times and unpredictable hot re]]></description>
            <content:encoded><![CDATA[<p>Four years I've stuck by Jekyll. It's gotten more powerful, but it's also gotten much slower. I was experiencing 90-second build times and unpredictable hot reloading that would require yet another 90-second build. Using Jekyll for a <a href="/portfolio/odyssey-marketing">work project</a> really soured me on it, but I hadn't had time until now to take the plunge. Jekyll just isn't the development experience I was used to with Vue and React (but it sure beats Drupal!). Worse, it was keeping me from updating my portfolio and writing.</p>
<p>I reviewed Hugo (it's fast) and VuePress, but landed on Gatsby. It was nice to practice React UI, and I got an opportunity to learn about GraphQL. I also took the opportunity to clean up my markdown (particularly in the portfolio section). Hopefully, my content is now consistent and more portable for future work. Gatsby also gives me a better playground to explore layout ideas than Jekyll ever did.</p>]]></content:encoded>
            <author>Zach</author>
            <category>UX</category>
        </item>
        <item>
            <title><![CDATA[UX Principles Article]]></title>
            <link>https://zachsteiner.com/posts/2015-07-13-ux-principles</link>
            <guid isPermaLink="false">https://zachsteiner.com/posts/2015-07-13-ux-principles</guid>
            <pubDate>Mon, 13 Jul 2015 00:00:00 GMT</pubDate>
            <description><![CDATA[Rather than keeping it in my head and on various white boards, I moved my UX principles from a white board and various digital notes to my portfolio section. T]]></description>
            <content:encoded><![CDATA[<p>Rather than keeping them in my head, I moved my UX principles from various whiteboards and digital notes to my portfolio section. The principles are a synthesis of established UX thought, applications of psychological research, and personal experience.</p>
<p><a href="/portfolio/2015-07-13-ux-principles-portfolio/">Read the principles</a></p>]]></content:encoded>
            <author>Zach</author>
            <category>UX</category>
        </item>
        <item>
            <title><![CDATA[Goodbye, WordPress]]></title>
            <link>https://zachsteiner.com/posts/2015-05-03-goodbye-wordpress</link>
            <guid isPermaLink="false">https://zachsteiner.com/posts/2015-05-03-goodbye-wordpress</guid>
            <pubDate>Sun, 03 May 2015 00:00:00 GMT</pubDate>
            <description><![CDATA[I've had a WordPress blog since 2008. It originally lived on a very unreliable Ubuntu server that doubled as a media server and lived under my TV. I loved the]]></description>
            <content:encoded><![CDATA[<p>I've had a WordPress blog since 2008. It originally lived on a very unreliable Ubuntu server that doubled as a media server and lived under my TV. I loved the stereo-receiver-shaped case, but hated the upkeep, so I quickly moved my WordPress install to a hosting provider. I tried a number of different themes (ranging from stock to gutted third-party themes) over the years, but was always frustrated by the mess that is WordPress theming. I don't really need all of the extra stuff that WordPress offers, like commenting, tagging, or elaborate sidebars. WordPress's speed always irked me, as I'm sure it did my occasional visitor.</p>
<p>What I really wanted was a way to create one-off HTML pages with custom styles (like my <a href="http://projects.zachsteiner.com">Projects</a> page), but still have the maintainability of templating. Having heard a bit about <a href="http://jekyllrb.com">Jekyll</a>, I decided to give it a go. It fits my manner of working quite a bit better than WordPress, balancing maintainability with control. It was easy to take over Jekyll's barebones default styling with custom Sass files, as opposed to building a WordPress theme from scratch or hazarding heavy edits to an existing one. I could use SVG where I wanted. I could use custom fonts as I wanted. I can write HTML where it makes sense and Markdown where it's easier. Incremental (or the eventual wholesale) redesigns become much easier. Basically, I get the benefits of a hand-coded site without the maintainability downsides.</p>
<p>I was able to import my WordPress posts into Jekyll with minimal fuss in about 5 minutes. The URL scheme even matches, so deep links aren't broken. This allowed me to develop against my existing site, not just a lone "Hello World" post.</p>
<p>I did find a few things challenging, but I was able to figure them out. First was integrating <a href="https://github.com/postcss/autoprefixer">Autoprefixer</a>, an invaluable PostCSS plugin. I found Octopress's port of Autoprefixer, which does the trick. I'm also perplexed by Coda's inability to play nice with Jekyll. Every build seemingly creates a new version of the entire site, so a change to a Sass partial will tell Coda's FTP that every HTML file and image has changed. This is mildly annoying and makes publishing quite a bit more cumbersome. It is likely an artifact of how Coda watches for changes to know what to publish. I've since found that there are workarounds (namely rsync or other tools that selectively sync to the server), but they seem to be more trouble than they're worth. For now, I will just dump everything on my server when I make changes.</p>
<p>Overall, I'm happy with the workflow, and it was fun to do a full redesign that I could actually implement. I'm not sure I would have been able to make this happen with WordPress without some serious cursing and tradeoffs. Or at least a lot of time. I'm looking forward to integrating some disparate areas of my site (particularly portfolio and music) into Jekyll as time permits.</p>
            <author>Zach</author>
            <category>Computers</category>
        </item>
        <item>
            <title><![CDATA[Fixing iTunes Sync Slowness]]></title>
            <link>https://zachsteiner.com/posts/2014-09-26-fixing-itunes-sync-slowness</link>
            <guid isPermaLink="false">https://zachsteiner.com/posts/2014-09-26-fixing-itunes-sync-slowness</guid>
            <pubDate>Fri, 26 Sep 2014 00:00:00 GMT</pubDate>
            <description><![CDATA[I'm still in the 00s, I admit it. I have a large iTunes library that I manually sync with my music player (iPhone). This is because I have a very large music l]]></description>
            <content:encoded><![CDATA[<p>I'm still in the '00s, I admit it. I have a very large iTunes library (all lossless) that I manually sync with my music player (iPhone). So large, in fact, that I'm roughly 10k songs over the limit for iTunes Match. I still selectively sync what music will fit on my iPhone. This will be the case until probably the iPhone 16, which will hopefully have TBs of storage. Or Pied Piper actually gets their compression algorithm to market.</p>
<p>Syncing and using iTunes were pretty annoying for two reasons.</p>
<ol>
<li>Syncing with my iPhone would take forever. It would take 5 minutes or more on Step 2 just checking which apps to sync. That's without even syncing music.</li>
<li>The few times iTunes crashed, restarting brought on the dreaded:</li>
</ol>
<p><img src="/images/posts/2014/09/itunes-checking.jpg" alt="Checking iTunes"></p>
<p>which could last for 10 minutes or more.</p>
<p>The culprit for my setup? A bloated (500+ MB) iTunes Genius database. I deleted it and turned off Genius (don't really use it). Now syncs are blazingly fast and iTunes rebounds from crashes with aplomb.</p>
<p>If you have these same symptoms, try the following:</p>
<ol>
<li>Quit iTunes</li>
<li>Drag "iTunes Library Genius.itdb" to your desktop. It's found in [username]/Music/iTunes.</li>
<li>Restart iTunes. iTunes will recreate a much smaller version of the file (mine is now 41 KB)</li>
<li>Et voilà! You can delete the file on your desktop now.</li>
</ol>
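<p>For those comfortable with Terminal, the same steps boil down to something like this sketch (the path assumes the default iTunes folder location):</p>
<pre><code class="language-shell">#!/bin/sh
# Quit iTunes first, then move the Genius database aside.
GENIUS_DB="$HOME/Music/iTunes/iTunes Library Genius.itdb"

if [ -f "$GENIUS_DB" ]; then
    mv "$GENIUS_DB" "$HOME/Desktop/"
    echo "Moved Genius database to the Desktop"
else
    echo "No Genius database found"
fi
</code></pre>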
<p>One note: if you use and like Genius, you will have to let iTunes rebuild its information. Eventually the file will get big again.</p>
<p>Now if only iTunes + iPhone would sync via WiFi, like it's supposed to. A boy can dream...</p>]]></content:encoded>
            <author>Zach</author>
            <category>Computers</category>
            <category>Music</category>
        </item>
        <item>
            <title><![CDATA[Changing my Pizza Life]]></title>
            <link>https://zachsteiner.com/posts/2014-09-14-changing-my-pizza-life</link>
            <guid isPermaLink="false">https://zachsteiner.com/posts/2014-09-14-changing-my-pizza-life</guid>
            <pubDate>Sun, 14 Sep 2014 00:00:00 GMT</pubDate>
            <description><![CDATA[I have been making pizza for many years, but it has taken me a long time to up my game. I used to bake pre-made Trader Joe's dough blobs on a round metal pan w]]></description>
            <content:encoded><![CDATA[<p>I have been making pizza for many years, but it has taken me a long time to up my game. I used to bake pre-made Trader Joe's dough blobs on a round metal pan with holes in it. I eventually graduated to a pizza stone and then to preheating my stone.</p>
<p>The first simple step was making my own dough. I modified the <a href="http://www.nytimes.com/2012/04/18/dining/basic-pizza-dough-recipe.html">recipe</a> from <a href="http://amzn.com/0764578650">How to Cook Everything</a> by adding a heaping tablespoon of sugar. I've had great luck doing the same day hour-ish rise and letting it rest in the fridge overnight. I've found that this recipe makes dough for two good sized pies.</p>
<p><img src="/images/posts/2014/09/pizza2.jpg" alt="Smoking Goose Salame Piccante, Roasted Red and Yellow Peppers, Fresh Basil, and Fresh Mozzarella"></p>
<p>The second step was a <a href="http://bakingsteel.com">baking steel</a>. Oven at 500 degrees for an hour before baking gives an impressive char on the bottom. Our bottom heating gas oven works really well with it.</p>
<p><img src="/images/posts/2014/09/pizza4.jpg" alt="Bottom Charring"></p>
<p>The third is the simplest of all: parchment paper. I've had a few disasters of sticking dough mangling my beautiful pie into a horrible inedible pile of goo. So disappointing when I've collected fantastic toppings. The last time this happened, a water main had broken in the neighborhood, so the water was off. Staring at the mess of a kitchen I couldn't clean, I knew I had to find a better way. Cornmeal didn't work and was messy. Flour in sufficient amounts did unfortunate things to the taste. And it was messy too. I can't remember where I first saw this life-changing method.</p>
<p>Simply put a square of parchment paper between the pizza and the peel. I often cut the corners off, so the paper is just a bit bigger than the pizza. Slide pie and parchment directly onto the steel/stone. It slides like a dream. The next part is important. <strong>Parchment paper will burn at 500 degrees</strong>. Not immediately, though; you have a few minutes, and the dough sets up in about 2 minutes. Then simply grab the paper with a pair of tongs while holding the pie in place with a spoon or spatula. Bake the rest of the way and remove with your peel when done.</p>
<p>There is one other advantage to parchment paper: it eases the process of making multiple pies. The "one in the oven, one on the peel" approach only works if you have two peels. While one pie bakes, you can transfer the next to a fresh sheet of parchment and get it topped and ready to go. You don't have to wait to top the second pie until you've removed the first from the oven.</p>
<p><img src="/images/posts/2014/09/pizza3.jpg" alt="Cherry Peppers, Red Onion, and Salame Piccante from Smoking Goose."></p>]]></content:encoded>
            <author>Zach</author>
            <category>Food</category>
        </item>
        <item>
            <title><![CDATA[Font Cage Match, Or When the Same Font Isn't the Same Font]]></title>
            <link>https://zachsteiner.com/posts/2014-06-29-font-cage-match-or-when-the-same-font-isnt-the-same-font</link>
            <guid isPermaLink="false">https://zachsteiner.com/posts/2014-06-29-font-cage-match-or-when-the-same-font-isnt-the-same-font</guid>
            <pubDate>Sun, 29 Jun 2014 00:00:00 GMT</pubDate>
            <description><![CDATA[## A bit of context

We are working on a HTML refresh of our applications. Our CSS is completely em based (see [here][1] for a discussion of the advantages). O]]></description>
            <content:encoded><![CDATA[<h2>A bit of context</h2>
<p>We are working on an HTML refresh of our applications. Our CSS is completely em-based (see <a href="http://css-tricks.com/why-ems/">here</a> for a discussion of the advantages). Our default font is Adobe's <a href="http://blog.typekit.com/2012/08/02/source-sans-pro/">Source Sans Pro</a>. It's reasonably attractive, open source, and readable at a variety of sizes. At the time I started using it, it was only available in OTF and TTF via <a href="http://sourceforge.net/projects/sourcesans.adobe/">Adobe's SourceForge project</a>. They have since added a full web font stack (i.e., EOT and WOFF). I used Font Squirrel's converter rather than dealing with converting to EOT, WOFF, and SVG manually. Font Squirrel was allowed to host and generate @font-face CSS for Source Sans at the time, but is no longer.</p>
<p><img src="/images/posts/2014/06/font-squirrel.png" alt="Font Squirrel"></p>
<p>I installed the Source Sans Pro TTF on my machine from Adobe's official repo. I happily wrote a bunch of CSS based on this version of the font, including some apparently fragile heights in my navigation bar:</p>
<p><img src="/images/posts/2014/06/nav-search.png" alt="Nav Search"></p>
<p>My layouts were a bit suspect after an <a href="http://zachsteiner.com/2014/06/icon-fonts-in-internet-explorer/" title="Icon Fonts in Internet Explorer">Internet Explorer debacle</a> involving the worst Internet Option in existence, but I remained blissfully ignorant until this week.</p>
<h2>An inch is not an inch</h2>
<p>After some reorganization of my development files, I noticed that line spacing on our sites was off. It looked like the Segoe UI oddities I had ignored in the past. Troubleshooting revealed that the @font-face files were not downloading due to a broken path variable in the Sass files. However, Source Sans Pro was still loading locally. A bit of head scratching and a moment of insight got me thinking that maybe the Font Squirrel version and the locally installed Adobe version were different. I set up a test of this hypothesis...</p>
<p>I created two copies of my Source Sans @font-face partial. One used the same Font Squirrel generated files as before, but I changed the font family to "Source Sans Pro Font Squirrel". The other pointed to a new directory called SourceSansProAdobe, into which I dumped all of the EOT, TTF, WOFF, and SVG files from SourceForge. I called that font family "Source Sans Pro Adobe". I created a new CSS file with only our <a href="http://meyerweb.com/eric/tools/css/reset/">reset</a> partial included and the two new @font-face partials. This is an example of the light weight:</p>
<pre><code class="language-scss">@font-face {
  font-family: "Source Sans Pro Font Squirrel";
  src: url("#{$location}/SourceSansPro-Light-webfont.eot");
  src: url("#{$location}/SourceSansPro-Light-webfont.eot?#iefix") format("embedded-opentype"),
       url("#{$location}/SourceSansPro-Light-webfont.woff") format("woff"),
       url("#{$location}/SourceSansPro-Light-webfont.ttf") format("truetype"),
       url("#{$location}/SourceSansPro-Light-webfont.svg#SourceSansProLight") format("svg");
  font-weight: 300;
  font-style: normal;
}
</code></pre>
<p>I color-coded each of the different fonts with a background color to see what was going on. The container div's background was set to grey to show vertical alignment better.</p>
<p>What did I get? All kinds of weird. The Font Squirrel and Adobe versions are very different in height and spacing around the glyphs. For the sake of argument, I decided to include Source Sans Pro from TypeKit. We didn't really have a compelling reason to use TypeKit: the few fonts we use are just as easy to self-host, and self-hosting avoids the cost of additional JavaScript. Surely, TypeKit would appear the same as one of the others. Nope. It plays by its own rules as well.</p>
<p><img src="/images/posts/2014/06/font-comparison-osx.png" alt="All of the specimens, on OS X (which does not have Segoe UI installed)"></p>
<p>I then hopped over to my VM with IE9 for testing, so I could see what happens with Segoe UI in the mix.</p>
<p><img src="/images/posts/2014/06/font-comparison-ie9.png" alt="Specimens in IE9. There is no local Source Sans Pro in the Windows VM."></p>
<p>Honestly, none of the fonts in the mix behave the same when it comes to vertical spacing and alignment. What a mess!</p>
<p>The <a href="http://meyerweb.com/eric/tools/css/reset/">reset</a> we're using has most tags set to vertical-align: baseline, which is standard practice. Setting that property to vertical-align: bottom resolved some of the height issues, but descenders (as in g and p) still extend beyond the container. Not ideal, but it does solve some of our issues.</p>
<p><img src="/images/posts/2014/06/font-comparison-final.png" alt="Final Comparison"></p>
<p>Ultimately, we decided to switch to the Adobe version as it has greater consistency in layout when it needs to fallback to Segoe UI. I was also able to greatly reduce the complexity in my @font-face declarations:</p>
<pre><code class="language-scss">@font-face {
  font-family: "Source Sans Pro";
  src: url("#{$location}/#{$fontName}-Light.otf.woff") format("woff"),
       url("#{$location}/#{$fontName}-Light.otf") format("opentype");
  font-weight: 300;
  font-style: normal;
}
</code></pre>
<p>All of the EOT was for legacy IE8-and-lower support, so we can ditch that. SVG was only for old iOS devices, of which there will be dwindlingly few by the time this is truly available on mobile. That left us with WOFF (for most browsers) and OTF (for some older stragglers, mainly of the Android tablet variety).</p>
<p>I'm on the fence about the vertical-align in the reset. There are arguments for both bottom and baseline. I'm also being more vigilant about keeping fragile height declarations out of my layouts. Now remains the fun of going through all of the designs that rely on this CSS to see what is broken. Thus far, thankfully, not a lot.</p>
<p>In the end, I don't really know why the same font distributed by three sources (including two from the same company and two from the same source file) would have different heights. Or why system fonts have heights defined differently than web fonts. Can't we all just agree on an inch being an inch? Or centimeters, at least? Any typography experts want to weigh in? I will post any responses as an update.</p>
            <author>Zach</author>
            <category>Web</category>
            <category>UX</category>
        </item>
        <item>
            <title><![CDATA[Icon Fonts in Internet Explorer]]></title>
            <link>https://zachsteiner.com/posts/2014-06-29-icon-fonts-in-internet-explorer</link>
            <guid isPermaLink="false">https://zachsteiner.com/posts/2014-06-29-icon-fonts-in-internet-explorer</guid>
            <pubDate>Sun, 29 Jun 2014 00:00:00 GMT</pubDate>
            <description><![CDATA[[Icon fonts are amazing][1]. I wholeheartedly adopted them for our HTML redesign. That is, until I did a client demo that is. None of the icons loaded, strewin]]></description>
            <content:encoded><![CDATA[<p><a href="http://css-tricks.com/examples/IconFont/">Icon fonts are amazing</a>. I wholeheartedly adopted them for our HTML redesign. That is, until I did a client demo. None of the icons loaded, strewing Unicode boxes and fallback characters across the page and breaking layouts. I was embarrassed and confused. "But IE6 <a href="http://caniuse.com/#feat=fontface">supports @font-face</a>", I yelled! We tested in IE8, IE9, and even IE7 for yucks! I had set all of the *.eot declarations properly! There is almost nothing written about this in <a href="http://filamentgroup.com/lab/bulletproof_icon_fonts.html">discussions of icon fonts</a>. After our QA team methodically switched security settings one at a time and reloaded, we discovered the worst setting in Internet Explorer:</p>
<p><img src="/images/posts/2014/06/ie-setting.png" alt="IE Font Setting"></p>
<p>Not only does our government client have this setting enabled as a custom security setting organization-wide, but it persists into current versions of Internet Explorer (at least as of IE11). It's a Windows setting. This client does not allow other browsers whatsoever. Chalk up another tally for <a href="http://css-tricks.com/icon-fonts-vs-svg/">SVG icons</a>. There goes our great idea to use icon fonts...</p>
<p>My first idea was to do a progressive enhancement refactor of our CSS. We also needed to detect whether the font was downloaded. Feature detection was out, because all of the browsers with this setting enabled actually do support the feature, if only their IT administrators would let them. We tried <a href="https://github.com/RoelN/font-face-render-check">detecting the font load</a>, which worked fine but seemed awfully burdensome, since it needs to run on every page load.</p>
<p>What about SVG? My initial reaction to SVG was optimistic, but most of the methods that allow icon-font-level CSS control require the SVG to be inserted inline on the page, and there's all sorts of wonkiness between browsers. Given the number of pages we need to support, I was very nervous about linking to individual SVG files, and a big defs file has its own issues. I really wanted a CSS-based solution.</p>
<p>We finally settled on a variant of Lonely Planet's method of <a href="http://ianfeather.co.uk/ten-reasons-we-switched-from-an-icon-font-to-svg/">SVG sprites</a>. SVGs are inserted as a background image on :before with background-size: cover. The height and width of the :before control the size of the icon. There are two drawbacks to this method:</p>
<ol>
<li>Icons need to be the same aspect ratio (basically square) for background-size: cover to work well</li>
<li>There is no way to color the SVG via CSS as with icon fonts.</li>
</ol>
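<p>A rough sketch of the base technique in plain CSS (the class names, path, and numbers here are illustrative, not our production code):</p>
<pre><code class="language-css">.icon-calendar:before {
  content: "";
  display: inline-block;
  /* the :before dimensions control the rendered icon size */
  width: 1em;
  height: 1em;
  background-image: url("/images/icons/calendar.svg");
  background-size: cover;
  background-repeat: no-repeat;
}

/* a color class shifts the background position to a different
   colored copy of the icon stacked within the same sprite file;
   for slice k of n slices, the offset is k / (n - 1) * 100% */
.c-warning.icon-calendar:before {
  background-position: 0 16.667%;
}
</code></pre>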
<p>We standardized our icons in <a href="http://icomoon.io">IcoMoon</a> at 32px x 32px. We then standardized colors in the sprite (it helps from a branding perspective). We abstracted naming, inspired by Bootstrap, with a .c- prefix. Adding a color only requires adding a line like:</p>
<pre><code class="language-html">&#x3C;g fill="#ff7800" class="icon-alert" transform="translate(0 192)">
  &#x3C;use xlink:href="#icon">&#x3C;/use>
&#x3C;/g>
</code></pre>
<p>Global find and replace makes it easy to add the same line to all SVG files we use.</p>
<p><img src="/images/posts/2014/06/calender-sprite.png" alt="Calendar Sprite"></p>
<p>We would have loved to do one big sprite with all icons and multiple colors, but, alas, Firefox does not support background-position-x and background-position-y separately. Specifying 100 icons x 7 color classes was a non-starter. I would never want to specify something like ".icon-calendar-red" as one of 700+ classes. We settled on icon classes (.icon-calendar chooses the file) and a separate color class (.c-warning changes the background position across all icons). Adding a new color only adds a single class definition, rather than 100+. All the classes are built with a Sass partial: a map for the icons and a map + loop with clever division for the colors. This also allows a mixin to add the icon to arbitrary selectors with only:</p>
<pre><code class="language-scss">@include icon(calendar, warning);
</code></pre>
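<p>The Sass side can be sketched roughly like this (the maps, paths, counts, and the .event-date selector are illustrative; the production partial is much longer):</p>
<pre><code class="language-scss">$icons: (calendar: "calendar.svg", alert: "alert.svg");
$colors: (default: 0, warning: 1, error: 2);
$color-count: 3;

@mixin icon($name, $color: default) {
  &#x26;:before {
    content: "";
    display: inline-block;
    width: 1em;
    height: 1em;
    background-image: url("/images/icons/#{map-get($icons, $name)}");
    background-size: cover;
    background-repeat: no-repeat;
    // the "clever division": slice k of n slices aligns at k / (n - 1) * 100%
    background-position: 0 (map-get($colors, $color) / ($color-count - 1) * 100%);
  }
}

.event-date {
  @include icon(calendar, warning);
}
</code></pre>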
<h2>What about the rest of the text?</h2>
<p>All of this headache with IE made me very sensitive to degrading gracefully when fonts do not download. The default sans-serif doesn't degrade functionality the way an icon-based button does, but it did break delicate layouts. After some investigation and experimentation, I chose <a href="http://www.microsoft.com/typography/fonts/family.aspx?FID=331">Segoe UI</a> as a fallback for the text.</p>
<pre><code class="language-css">font-family: "Source Sans Pro", "Helvetica Neue", "Segoe UI", Arial, sans-serif;
</code></pre>
<p>It has a variety of weights, including light, and bears a passing resemblance to Source Sans Pro (more so than Arial, at least). Best part is that it has come installed with all Office versions since <a href="http://www.microsoft.com/typography/fonts/product.aspx?PID=148">Office 2007</a> and is installed in <a href="http://www.microsoft.com/typography/fonts/product.aspx?PID=149">Windows Vista</a> and newer. That leaves out our Windows XP + Office 2003 users. However, they have to use Firefox or Chrome (we no longer support IE8 and lower), so it's a non-issue: Source Sans Pro should load without trouble.</p>
<p>However, it always bothered me that line heights were very different between Source Sans Pro and Segoe UI. I will address this in a future post.</p>]]></content:encoded>
            <author>Zach</author>
            <category>Web</category>
            <category>UX</category>
        </item>
        <item>
            <title><![CDATA[My Old Fashioned]]></title>
            <link>https://zachsteiner.com/posts/2010-11-11-my-old-fashioned</link>
            <guid isPermaLink="false">https://zachsteiner.com/posts/2010-11-11-my-old-fashioned</guid>
            <pubDate>Thu, 11 Nov 2010 00:00:00 GMT</pubDate>
            <description><![CDATA[Muddling oranges and cherries and adding sprite are common travesties that force me to avoid the Old Fashioned at bars. A stirred Manhattan on the rocks is a d]]></description>
            <content:encoded><![CDATA[<p>Muddling oranges and cherries and adding Sprite are common travesties that force me to avoid the Old Fashioned at bars. A stirred Manhattan on the rocks is a decent substitute.</p>
<p>My Old Fashioned is a slight variation on this <a href="http://americandrink.net/post/1526699073/the-old-fashioned">excellent recipe</a>:</p>
<h2>Ingredients:</h2>
<p>2 oz of single barrel or small batch bourbon (I like <a href="http://www.liquorsnob.com/archives/2006/03/evan_williams_single_barrel_1996_review.php">Evan Williams 10 year</a> or <a href="http://www.liquorsnob.com/archives/2006/03/elijah_craig_small_batch_bourb.php">Elijah Craig</a> for a good, reasonably priced bottle)</p>
<p>2 sugar cubes (better with raw/turbinado/demerara sugar)</p>
<p>2 to 3 dashes of Angostura Bitters</p>
<p>3 to 4 Maraschino cherries (Optional, but I like to eat them at the end of the drink. You can substitute a slice of lemon or orange.)</p>
<h2>Preparation:</h2>
<p>Add the bitters to the sugar cubes in the bottom of an Old Fashioned glass with a splash of water (or club soda if you have it). Crush the sugar with a bar spoon; then stir until the sugar dissolves. Add the bourbon and swirl to stir. Add a few ice cubes and the cherries. Let the ice melt as long as you can hold off before enjoying.</p>]]></content:encoded>
            <author>Zach</author>
            <category>Drinks</category>
        </item>
        <item>
            <title><![CDATA[Follow-up on Ask Mozoot]]></title>
            <link>https://zachsteiner.com/posts/2009-11-13-follow-up-on-ask-mozoot</link>
            <guid isPermaLink="false">https://zachsteiner.com/posts/2009-11-13-follow-up-on-ask-mozoot</guid>
            <pubDate>Fri, 13 Nov 2009 00:00:00 GMT</pubDate>
            <description><![CDATA[This is a follow up to my [previous post][1]

I received another suspicious text message about Ask Mozoot the other night. The next morning, I checked my AT&T]]></description>
            <content:encoded><![CDATA[<p>This is a follow-up to my <a href="https://zachsteiner.com/2009/10/shady-text-messaging-services/">previous post</a>.</p>
<p>I received another suspicious text message about Ask Mozoot the other night. The next morning, I checked my AT&#x26;T account and there was another $9.99 charge for Ask Mozoot. I called AT&#x26;T again and asked what was going on. It turns out that purchase blocking <em>is</em> available for iPhone; the previous customer service rep said it was not. His misinformation was based on the fact that the blocking service does not work in iTunes, but it does block all mobile "purchases," such as Ask Mozoot. iTunes does not trick you into buying things, so that's not an issue. I added purchase blocking to my account, though it took several calls and several confusing bills. Carriers should make purchase blocking (at least PIN-validated) the default, but that would cut into their profits (similar to <a href="http://pogue.blogs.nytimes.com/2009/11/12/verizon-how-much-do-you-charge-now/">this scheme</a>).</p>
<p><strong>My advice</strong>: Be proactive and add purchase blocking to your cell account. It's free for AT&#x26;T and I would imagine is for other carriers as well.</p>]]></content:encoded>
            <author>Zach</author>
            <category>Computers</category>
        </item>
        <item>
            <title><![CDATA[Update to Snow Leopard Bug]]></title>
            <link>https://zachsteiner.com/posts/2009-10-16-update-to-snow-leopard-bug</link>
            <guid isPermaLink="false">https://zachsteiner.com/posts/2009-10-16-update-to-snow-leopard-bug</guid>
            <pubDate>Fri, 16 Oct 2009 00:00:00 GMT</pubDate>
            <description><![CDATA[So it seems that there is an easier way to switch the network settings.

Create a new network location (I called mine "Wired Reset"). Switch location to "Wired]]></description>
            <content:encoded><![CDATA[<p>So it seems that there is an easier way to switch the network settings.</p>
<p>Create a new network location (I called mine "Wired Reset"). Switch location to "Wired Reset." Hit Apply. Switch back to "Automatic." The good news is that this can be accomplished via AppleScript. I know there are more elegant ways to script system preferences, but I tried one and it didn't work reliably.</p>
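<p>As an aside, the networksetup command line tool can also switch locations directly, which may be a simpler route (a sketch; I haven't verified this on 10.6):</p>
<pre><code class="language-shell">#!/bin/sh
# Location names must match those defined in the Network preference pane
networksetup -switchtolocation "Wired Reset"
sleep 2
networksetup -switchtolocation "Automatic"
</code></pre>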
<pre><code class="language-applescript">tell application "System Preferences"
    activate
    reveal pane id "com.apple.preference.network"
end tell

tell application "System Events"
    tell window "Network" of process "System Preferences"
        tell pop up button 1
            click
            click menu item "Wired Reset" of menu 1
        end tell
        delay 1
        click button "Apply"
    end tell

    tell window "Network" of process "System Preferences"
        tell pop up button 1
            click
            click menu item "Automatic" of menu 1
        end tell
        delay 1
        click button "Apply"
    end tell
end tell

quit application "System Preferences"
</code></pre>]]></content:encoded>
            <author>Zach</author>
            <category>Computers</category>
        </item>
        <item>
            <title><![CDATA[Shady Text Messaging Services called Mozoot]]></title>
            <link>https://zachsteiner.com/posts/2009-10-14-shady-text-messaging-services</link>
            <guid isPermaLink="false">https://zachsteiner.com/posts/2009-10-14-shady-text-messaging-services</guid>
            <pubDate>Wed, 14 Oct 2009 00:00:00 GMT</pubDate>
            <description><![CDATA[Examining my mobile phone bill today, I saw a suspicious charge called "Ask Mozoot Alerts" under the *Mobile Purchases & Downloads: Communication Charges* sect]]></description>
            <content:encoded><![CDATA[<p>Examining my mobile phone bill today, I saw a suspicious charge called "Ask Mozoot Alerts" under the <em>Mobile Purchases &#x26; Downloads: Communication Charges</em> section.</p>
<p>Somehow I was signed up for (I use the passive voice because I do not recall, or know, whether any action on my part precipitated this) a text messaging service called <a href="http://mozoot.com/">MoZoot</a>. Users text them questions and they answer them; not terribly useful considering I have Google and the full internet available on my phone. Worse still, I don't remember signing up for this service (and at worst never actually did). Unsolicited text messages aside, this service insidiously adds a $9.99 recurring monthly charge to your mobile phone bill until you opt out.</p>
<p>As far as I can tell, the vector of infection (I consider this despicable company not so much a service as malware upon my mobile phone) was an unsolicited text message that I merely opened. Granted, this could just be a symptom, meaning they got my number elsewhere and texted to inform me that my phone had been infected with their "service." It seems you have to reply with a "stop" text message, otherwise the monthly billing persists, despite never actually using the "service." Such services should be opt-in only, not opt-out.</p>
<p>It took me two months to notice this (thanks to auto-billing), but fortunately AT&#x26;T refunded both months of charges. I didn't lose any money, but what a hassle. There was little information about this online, so I thought I would perform this service for others similarly afflicted.</p>]]></content:encoded>
            <author>Zach</author>
            <category>Computers</category>
        </item>
        <item>
            <title><![CDATA[Snow Leopard Network Bug and the DHCP Two-step]]></title>
            <link>https://zachsteiner.com/posts/2009-09-14-snow-leopard-network-bug-and-the-dhcp-two-step</link>
            <guid isPermaLink="false">https://zachsteiner.com/posts/2009-09-14-snow-leopard-network-bug-and-the-dhcp-two-step</guid>
            <pubDate>Mon, 14 Sep 2009 00:00:00 GMT</pubDate>
            <description><![CDATA[**UPDATE**: [Updated and easier fix here][1].

I updated to OS X 10.6 Snow Leopard the day it came out. For the record, I take full responsibility for the pain]]></description>
            <content:encoded><![CDATA[<p><strong>UPDATE</strong>: <a href="/2009/10/update-to-snow-leopard-bug/">Updated and easier fix here</a>.</p>
<p>I updated to OS X 10.6 Snow Leopard the day it came out. For the record, I take full responsibility for the pains of being an early adopter and am normally quite patient with the quirks of early releases. I don't want to complain (maybe I do a little bit), but I want to share a workaround (it's not a fix) for the issue I (and colleagues using 10.6) have been experiencing.</p>
<p>It would be charitable to call the wireless connection in my building flaky, so I rely on my ethernet connection to be productive. After updating to 10.6, I found that though I get an IP address, I cannot access the internet. Wired connections at home (AirPort Base Station) and elsewhere on campus are no issue. It is just in my office; it is just with Snow Leopard. My officemate, also on 10.6, has the same issue; Tiger and Leopard machines have no such difficulty. Simple things like renewing the DHCP lease don't work, nor does turning off IPv6 (an original suspect).</p>
<p>After a morning of beating my head against the wall... the workaround, the "DHCP Two-step":</p>
<ol>
<li>In the Network system preference pane, select the Ethernet connection.</li>
<li>Copy the IP address that is displayed.</li>
<li>Switch to "Using DHCP with manual address" under Configure IPv4. Hit Apply.</li>
<li>Switch back to "Using DHCP." Hit Apply again.</li>
</ol>
<p>The internet should work now. However, when restarting or waking from sleep, be prepared to go back to Step 1. That is why this is a workaround, not a fix (a fix would be permanent), and one that I hope will be remedied with a point update. It has not been as of 10.6.1.</p>
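<p>For anyone who would rather script the two-step than click through the pane after every wake, here is a sketch of what the same dance might look like in Terminal. This is an untested assumption on my part: that <code>networksetup</code>'s <code>-setmanualwithdhcprouter</code> and <code>-setdhcp</code> modes correspond to the two Configure IPv4 settings above, and that your wired interface is <code>en0</code> with a network service named "Ethernet":</p>
<pre><code class="language-shell"># Step 2: grab the address the DHCP server already handed out
IP=$(ipconfig getifaddr en0)

# Step 3: "Using DHCP with manual address" + Apply
sudo networksetup -setmanualwithdhcprouter Ethernet "$IP"

# Step 4: back to "Using DHCP" + Apply
sudo networksetup -setdhcp Ethernet
</code></pre>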
<p>I have no idea why this happens, but I'm guessing there is a change in how Snow Leopard requests IP addresses that is somehow incompatible with the older router hardware used in this part of the building. If anyone reading this has a more permanent workaround than the "DHCP Two-step" I've outlined above, or has a better understanding of why we are experiencing this problem in the first place, please drop a line in the comments. Here's hoping that Apple fixes this soon!</p>]]></content:encoded>
            <author>Zach</author>
            <category>Computers</category>
        </item>
        <item>
            <title><![CDATA[Death to Folder Hierarchies]]></title>
            <link>https://zachsteiner.com/posts/2009-04-09-death-to-folder-hierarchies</link>
            <guid isPermaLink="false">https://zachsteiner.com/posts/2009-04-09-death-to-folder-hierarchies</guid>
            <pubDate>Thu, 09 Apr 2009 00:00:00 GMT</pubDate>
            <description><![CDATA[Receiving materials from a classmate has made the issue of folder hierarchies more salient to me. In these comprehensive exam materials, articles are arranged]]></description>
            <content:encoded><![CDATA[<p>Receiving materials from a classmate has made the issue of folder hierarchies more salient to me. In these comprehensive exam materials, articles are arranged by topic, which makes sense, but before that there is a dichotomous hierarchy imposed on the topics: industrial and organizational (I dislike this false distinction in my field, but that's another story). These higher order folders only have folders in them. This creates more drilling down every time I need to access articles. This creates the decision whether the topic I need is in "I" or "O." Then there are subfolders within topics, so I need to remember whether a topic is I or O AND whether it subsumed by another topic. This creates a lot of cognitive load.</p>
<p>Folder hierarchies seem less and less relevant in modern computing, but they persist. Advanced searching and tagging obviate the need for deep folder hierarchies; even rigorous file naming can help when used with something like Spotlight. Yet file browsers (Finder, Windows Explorer) still reinforce the folder paradigm AND require duplicate copies of files. A better design would be a behind-the-scenes database, like iTunes or iPhoto. Most people don't think about where their iTunes music lives in the file system, but iTunes keeps it organized transparently. I can imagine a Finder that more prominently brings in tags and "playlists" (smart or otherwise) that allow dynamic sorting of files without creating duplicates. I would only need one copy of the articles that are used in a class, for the comp exam, and in my research. Hard drives are huge, but this would be far more efficient in space and organization. A master database would manage this, as iTunes does, all behind the scenes. Tags, searching, and smart folders would create as-needed organization without forcing the user to enforce and remember an organization. Everything would be searchable, and the organization could be changed as often as needed. Finished the comp? Just remove the smart folder; the files (and tags) are still there for later reference. Leopard's Finder does have smart folders based on Spotlight, but they are very limited and don't allow the kind of folder-free organization I crave.</p>
<p>UPDATE: I guess this topic is in the air, as the excellent UI blog Ignore the Code just <a href="http://ignorethecode.net/blog/2009/04/09/the-desktop-metaphor/">posted</a> on it. By the way, I love the recent redesign of his blog.</p>]]></content:encoded>
            <author>Zach</author>
            <category>UX</category>
            <category>Computers</category>
        </item>
        <item>
            <title><![CDATA[Follow Up to SPSS Woes]]></title>
            <link>https://zachsteiner.com/posts/2009-03-09-follow-up-to-spss-woes</link>
            <guid isPermaLink="false">https://zachsteiner.com/posts/2009-03-09-follow-up-to-spss-woes</guid>
            <pubDate>Mon, 09 Mar 2009 00:00:00 GMT</pubDate>
            <description><![CDATA[I figured what was going on with the inaccurate scatter plot from the Chart Builder. For some reason, the variables were being treated as Nominal, despite the]]></description>
            <content:encoded><![CDATA[<p>I figured what was going on with the inaccurate scatter plot from the Chart Builder. For some reason, the variables were being treated as Nominal, despite the variable view showing them as Scale!</p>
<p><img src="/images/posts/2009/03/variables2.png" alt="Variables"></p>
<p>This may be an artifact of an import from SAS, but the Legacy Dialogs and Descriptives treat both variables as if they are Scale. Even though I was able to get a correct graph by changing the variables to Nominal and back to Scale, I am left just as befuddled as to how this happened in the first place. It seems you may be better off using Excel (or Numbers!) for graphing.</p>
<p>This error really calls into question most analyses I (and others) run in SPSS. Considering that much of the published research in psychology relies on the accuracy of SPSS in reporting statistics, I am very concerned.</p>
<p><strong>Moral of the story</strong>: Use <a href="http://www.sas.com/">SAS</a> or <a href="http://www.r-project.org/">R</a>. Unfortunately, I can't use SAS (it hasn't had a Mac version since OS 9). Maybe it is high time to give R a serious look, but the lack of a GUI makes for a huge learning curve.</p>]]></content:encoded>
            <author>Zach</author>
            <category>Psychology</category>
            <category>Statistics</category>
            <category>UX</category>
        </item>
        <item>
            <title><![CDATA[What is wrong with the Chart Builder in SPSS?]]></title>
            <link>https://zachsteiner.com/posts/2009-03-07-spss-wtf-moment</link>
            <guid isPermaLink="false">https://zachsteiner.com/posts/2009-03-07-spss-wtf-moment</guid>
            <pubDate>Sat, 07 Mar 2009 00:00:00 GMT</pubDate>
            <description><![CDATA[I am the lab instructor for a graduate psychology statistics course. For my class's homework, they were required to figure out if a linear regression transform]]></description>
            <content:encoded><![CDATA[<p>I am the lab instructor for a graduate psychology statistics course. For my class's homework, they were required to figure out if a linear regression transformation was warranted for a sample data set. When assessing the skewness of a distribution, scatterplots are invaluable. Unfortunately, SPSS does not render them consistently even within itself. The data set is very skewed and requires natural log transformations for both X and Y to not violate assumptions of linear regression.</p>
<p>Consider first the Chart Builder, the option SPSS prefers, as evidenced by its hiding the "legacy dialogs" inside a submenu. I suspect that eventually these will be removed from the menus altogether, leaving the commands available via syntax for backwards compatibility. The following syntax generated this graph:</p>
<pre><code class="language-text">GGRAPH
/GRAPHDATASET NAME="graphdataset" VARIABLES=BODY BRAIN MISSING=LISTWISE REPORTMISSING=NO
/GRAPHSPEC SOURCE=INLINE.
BEGIN GPL

SOURCE: s=userSource(id("graphdataset"))
DATA: BODY=col(source(s), name("BODY"), unit.category())
DATA: BRAIN=col(source(s), name("BRAIN"), unit.category())
GUIDE: axis(dim(1), label("BODY"))
GUIDE: axis(dim(2), label("BRAIN"))
ELEMENT: point(position(BODY*BRAIN))

END GPL.
</code></pre>
<p><img src="/images/posts/2009/03/chartbuilder2.png" alt="Chartbuilder"></p>
<p>It looks like a linear relationship, right? Wrong! The scale is grossly off: the Body variable actually goes up over 6,000. The true relationship is shown through the hidden (supposedly deprecated) legacy dialogs, which render it with much more parsimonious syntax:</p>
<pre><code class="language-text">GRAPH
/SCATTERPLOT(BIVAR)=BODY WITH BRAIN
/MISSING=LISTWISE.
</code></pre>
<p><img src="/images/posts/2009/03/legacy2.png" alt="Scatterplot"></p>
<p>Just for yucks, I copied the data into <a href="http://www.apple.com/iwork/numbers/">Numbers</a>, a <strong>consumer</strong> spreadsheet app from Apple. Even it does a better job of displaying the data than SPSS's Chart Builder.</p>
<p><img src="/images/posts/2009/03/numbers12.png" alt="Numbers"></p>
<p>I am left with a lot of questions and few answers. Why did the Chart Builder cut off the points at the upper range that radically skew the data set? Why is its syntax four times as long as the old charting syntax? Why can't it produce as accurate a graph as a "baby" spreadsheet app? What is going on here?</p>]]></content:encoded>
            <author>Zach</author>
            <category>Psychology</category>
            <category>Statistics</category>
            <category>UX</category>
        </item>
        <item>
            <title><![CDATA[My new reconstruction of SMiLE]]></title>
            <link>https://zachsteiner.com/posts/2009-01-31-my-new-reconstruction-of-smile</link>
            <guid isPermaLink="false">https://zachsteiner.com/posts/2009-01-31-my-new-reconstruction-of-smile</guid>
            <pubDate>Sat, 31 Jan 2009 00:00:00 GMT</pubDate>
            <description><![CDATA[**UPDATE** 4 February 2010: I have posted the tracks [on my projects site][1]

I recently did a new reconstruction (remix or completion if you will) of  The Be]]></description>
            <content:encoded><![CDATA[<p><strong>UPDATE</strong> 4 February 2010: I have posted the tracks <a href="https://projects.zachsteiner.com/smile.html">on my projects site</a></p>
<p>I recently did a new reconstruction (remix or completion, if you will) of The Beach Boys' SMiLE, built from vintage 1967 tracks into a cohesive piece. If you are unfamiliar with SMiLE, see <a href="http://en.wikipedia.org/wiki/Smile_(Beach_Boys_album)">here</a> and <a href="http://en.wikipedia.org/wiki/Smile_(Brian_Wilson_album)">here</a>. This is my third crack at doing this. The first time was before Brian Wilson Presents SMiLE, sometime in 2003; I found some random constructions, all in mp3, and put it together from those. My second attempt came after Brian Wilson Presents SMiLE, using mp3 versions of the Good Vibrations box set, tracks from my first attempt, and a bit better understanding of DAWs (I was still using Acid at that point). That version was pretty good (a hybrid between my first version and the 2004 version), but it always annoyed me that it was in mp3, so now I have done a lossless version. I have tracked down the best-sounding versions (all lossless) that I could. I now have and can use Logic, so the result flows and has a bit more polish (aside from the rough recording quality of some pieces).</p>
<p>I take some cues from the 2004 SMiLE and other constructions, but have left out the awkward vocal overdubs, clips from the modern recording, and digital pitch correction (please don't make the Beach Boys sound like Kanye West!). I respect the effort of PurpleChick, but am not a fan of her version. I wanted He Gives Speeches, more Bicycle Rider, and an ending with Surf's Up, among other pieces not included with the 2004 recording. The end result is not a slavish recreation of Brian Wilson Presents SMiLE with vintage tracks, nor a purely historical recreation of what might have been had the album seen release in '67. This is personal and reflects my avant-garde aesthetics (though they aren't hard to find in the original tracks), a fondness for instrumentals/extended pieces, and a bit of revisionist history.</p>
<p>Here are some notes by track. The source(s) for each track are in parentheses.</p>
<ol>
<li>Our Prayer / Gee (Good Vibrations Box). I like the pairing and transitions on Brian Wilson Presents SMiLE. All sourced from the Good Vibrations Box, so it all sounds great (especially Our Prayer).</li>
<li>Heroes and Villains (Smiley Smile Bonus Track). I love this version (the Cantina version). It's weirder than the released version: love the tape feedback in the middle. It fits SMiLE more than the single version.</li>
<li>Do You Like Worms (Good Vibrations Box). This is nice and stretched out.</li>
<li>Barnyard (SMiLE Vigotone). Has the vocals/animal sounds and decent sound, though it's probably one of the worst songs from a fidelity standpoint. There is not much to be done here.</li>
<li>Old Master Painter / You are my Sunshine (SMiLE Vigotone + Mark Linnet Mix). The Old Master Painter sounded better on the Vigotone and Sunshine better on the Mark Linnet Mix. The vocals are much more audible. The sax ending is better on Vigotone. All crossfaded seamlessly.</li>
<li>He Gives Speeches (Mok SMiLE). I'm not sure where he got this version, but it sounds great. I like the slapback delay on the handclaps.</li>
<li>Wonderful (Good Vibrations Box). Wonderful sound for a wonderful song.</li>
<li>Child is Father of the Man (Mok SMiLE). Not sure where he got this version, but it sounds better than the Vigotone and has quite a bit more on the end. I did some editing to remove a vocal section I didn't like towards the end. These pieces are so wonderfully modular that you can do this.</li>
<li>Cabinessence (Good Vibrations Box). This is my favorite version of this song. I love the droning trombone.</li>
<li>Bicycle Rider (Good Vibrations Box + SMiLE Vigotone). This is my experimental track; a home for all the avant-garde or drugged-out (depending on your bias) snippets. I incorporate George Fell into his French Horn (I had to include some of it!) and bits of Heroes and Villains with the Bicycle Rider theme. I apologize for the creative liberties, but this is my reconstruction.</li>
<li>Good Vibrations (Smiley Smile). The original and best. I toyed with the idea of using the stretched-out instrumental version from the sessions, but why mess with perfection? The best way to end side one, as it likely would have had the album been released in '67.</li>
<li>Look (Mark Linnet Mix). No vocals, but I like this as an instrumental. I did a bit of editing to take out a section that interrupted the flow towards the end. This piece is full of abrupt cuts, so it works; I think mine is much smoother.</li>
<li>Vegetables (Good Vibrations Box). This is a nice version. It may not be my favorite, but it sounds great. The ending piano segues nicely into...</li>
<li>I Want to be Around / Workshop (SMiLE Vigotone). This is the best sound I've heard for this one. I don't like the vocals on Brian Wilson Presents SMiLE; it works much better as an instrumental, much weirder.</li>
<li>Holiday (Mark Linnet Mix). Another instrumental. The sound is pretty good, better than a lot of versions I've heard. The piano towards the end is pretty cool. I'm not sure that I like the vocal version of this one either.</li>
<li>Windchimes (Good Vibrations Box). Great song, great sound. If only it had the great drum part of Brian Wilson Presents SMiLE, though the piano ending is fantastic.</li>
<li>Mrs. O'Leary's Cow (Mok SMiLE + Mark Linnet Mix). This may be my favorite piece on SMiLE. The tasty intro from Mok just makes it all the weirder. There are versions I like ever so slightly better (like the one on Archaeology), but this one sounds the best.</li>
<li>I Love to Say Dada (Unsurpassed Masters Vol. 17 - SMiLE Sessions + Good Vibrations Box). The water chant comes from syncing the backing organ drone take with a wonderful stereo vocal take. The I Love to Say Dada section comes from the Good Vibrations box. I prefer the instrumental and wordless vocals over the lyrics.</li>
<li>Can't Wait Too Long (Smiley Smile). This is not a true SMiLE track as it was recorded later, but it's wonderful and fits the feel of the end. The production and vocals are just to die for. The outro is fantastic. It was included on the Mark Linnet tapes, as well.</li>
<li>Surf's Up (Mok SMiLE). The arrangement is stretched out and suitably baroque to end the album and the sound is great. I feel Brian Wilson Presents SMiLE did this song a disservice by not ending with it. The solo Brian version is still my favorite, but this fits best. Though I love it and it sounds great, the solo piano is a bit too austere to fit with the rest of the album.</li>
</ol>]]></content:encoded>
            <author>Zach</author>
            <category>Music</category>
        </item>
        <item>
            <title><![CDATA[10,000 Hours (Yeah, right, Malcolm!)]]></title>
            <link>https://zachsteiner.com/posts/2008-11-17-10000-hours-yeah-right-malcolm</link>
            <guid isPermaLink="false">https://zachsteiner.com/posts/2008-11-17-10000-hours-yeah-right-malcolm</guid>
            <pubDate>Mon, 17 Nov 2008 00:00:00 GMT</pubDate>
            <description><![CDATA[Malcolm Gladwell's [new][1] book *Outliers *makes the audacious claim that 10,000 hours of practice is the magic number for obtaining expertise ([from Daring F]]></description>
            <content:encoded><![CDATA[<p>Malcolm Gladwell's <a href="http://www.amazon.com/Outliers-Story-Success-Malcolm-Gladwell/dp/0316017922/ref=pd_bbs_sr_1?ie=UTF8&#x26;s=books&#x26;qid=1226959637&#x26;sr=8-1">new</a> book *Outliers *makes the audacious claim that 10,000 hours of practice is the magic number for obtaining expertise (<a href="http://daringfireball.net/linked/2008/11/15/10000-hours">from Daring Fireball</a>)</p>
<p>Having done a fair amount of reading on expertise (though it is not my research area), I can say the 10,000-hour figure for expertise attainment is arbitrary and quite problematic. Psychological research generally shuns such magic numbers because they oversimplify reality. I don't fault Gladwell for wanting to simplify the dizzying complexity of research in this area, but I do take issue with his tendency to make claims that are not warranted by the research while claiming support from it. This is quite disingenuous for someone with such clout. More on Gladwell later...</p>
<p>The 10,000 hours figure does not coincide with the cognitive psychology research into expertise. The rule of thumb is that 10 years of effortful practice is needed for expertise. I will stress that this rule of thumb (or average tendency) is not the same as a magic number because there is considerable variance in human behavior contributed by existing experience (e.g., expertise in another area) and natural ability, among other things. I have played saxophone for over 10 years, but I do not have the natural ability and have not put forth effortful practice enough to become an expert. Thus, I am still a dilettante after easily 10,000 hours playing and 10 years of practice. Others have become expert saxophonists in less time. That does not change that the average tendency is still about 10 years.</p>
<p>The figure of 10,000 hours does not match up with the 10-year standard supported by the research. Say someone devotes 40 hours a week to practice (as if it were a full-time job): that comes out to 4.8 years. Devoting 30 hours a week comes out to 6.4 years. Neither is reasonably close to 10 years. I'm not saying it is impossible to achieve expertise in less than 10 years, but 10 years is the average amount of time needed.</p>
<p>Lest you think Mr. Gruber believes this figure: when I sent a version of this post, he replied with a quote from Merlin Mann: "If I were half as smart as Malcolm Gladwell, I'd already have statistics and a clever name for my theory that he's mostly full of shit." (from <a href="https://twitter.com/hotdogsladies/status/1008729697">here</a>).</p>]]></content:encoded>
            <author>Zach</author>
            <category>Psychology</category>
        </item>
        <item>
            <title><![CDATA[Fixing Computers]]></title>
            <link>https://zachsteiner.com/posts/2008-09-08-fixing-computers</link>
            <guid isPermaLink="false">https://zachsteiner.com/posts/2008-09-08-fixing-computers</guid>
            <pubDate>Mon, 08 Sep 2008 00:00:00 GMT</pubDate>
            <description><![CDATA[As the de facto tech support for my grad program, roommate, and family (which is weird with a PhD electrical engineer and a M.A. in technology education), I am]]></description>
            <content:encoded><![CDATA[<p>As the de facto tech support for my grad program, roommate, and family (which is weird with a PhD electrical engineer and a M.A. in technology education), I am often greeted with awe and wonder (well, not by the immediate family). Not that this is not nice, it is not exactly warranted. It's not really a superhuman ability; it's really just old fashioned trial and error. It's really like the science I teach and purport to practice: form a hypothesis, collect some data, and see if the hypothesis is correct. With computers, you often get less data, but your feedback is a lot more immediate. When my roommate's laptop wasn't connecting to the network, I hypothesized it had something to do with the IP address. The hypothesis was confirmed, but that brings up the frustrating thing about computers. I started doing the thing that ultimately worked, but it didn't work for another 15 minutes of trying that same thing. That brings me to biggest part of fixing computers. Bigger than knowledge. Bigger than this hypothesis testing metaphor. You have to sit there and click the same thing or do the same sequence over and over until it works. It's not a superhuman ability or knowledge it's just patience, persistence, persnickety, and borderline autism.</p>]]></content:encoded>
            <author>Zach</author>
            <category>Computers</category>
        </item>
        <item>
            <title><![CDATA[Presidential Word Counts]]></title>
            <link>https://zachsteiner.com/posts/2008-09-05-presidential-word-counts</link>
            <guid isPermaLink="false">https://zachsteiner.com/posts/2008-09-05-presidential-word-counts</guid>
            <pubDate>Fri, 05 Sep 2008 00:00:00 GMT</pubDate>
            <description><![CDATA[It's conference submission season around our department, so I am very sensitive to word counts. I noticed during the primary that Obama's website featured leng]]></description>
            <content:encoded><![CDATA[<p>It's conference submission season around our department, so I am very sensitive to word counts. I noticed during the primary that Obama's website featured lengthy policy documents in PDF format, whereas McCain and Clinton had much skimpier policy positions on their respective sites. The goal of this study was to see if there were actual differences in the quantity of information provided by the two major party candidates for president.</p>
<h2>METHOD</h2>
<p>I selected seven issues that are commonly discussed and/or that I (and many Americans during this election) feel are important: Economy, Education, Energy, Foreign Policy, Health Care, Iraq, and Technology. I did a word count of the information the two campaigns provide about these issues on their websites, <a href="http://www.johnmccain.com">http://www.johnmccain.com</a> and <a href="http://www.barackobama.com">http://www.barackobama.com</a>. I computed the total number of words devoted to these issues, as well as a mean word count across the 7 issues. For the Obama site, I counted only the PDF policy paper and not the summary page, so the actual word count is quite a bit higher. The summary content is redundant with the main policy paper, so it would be unfair to include it in the word count. Lastly, I counted the total number of issues discussed on each campaign site.</p>
<h2>RESULTS</h2>
<h3>Economy:</h3>
<p>McCain: 5,669
Obama: 6,767</p>
<h3>Education:</h3>
<p>McCain: 1,945
Obama: 8,983</p>
<h3>Energy:</h3>
<p>McCain: 1,997
Obama: 4,639</p>
<h3>Foreign Policy:</h3>
<p>McCain: 3,025 (McCain calls Foreign Policy "National Security.")
Obama: 10,922</p>
<h3>Health Care:</h3>
<p>McCain: 1,388
Obama: 6,839</p>
<h3>Iraq:</h3>
<p>McCain: 1,433
Obama: 1,395</p>
<h3>Technology:</h3>
<p>McCain: 4,518
Obama: 5,320</p>
<h3>Sum / Mean:</h3>
<p>McCain: 19,975 / 2,854
Obama: 44,865 / 6,409
The difference between these means is statistically significant, t(12) = -2.69, p = .02.</p>
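<p>For the curious, the reported t-test can be reproduced from the seven per-issue counts above with a few lines of Python. (This sketch is mine, not part of the original analysis; it assumes an independent-samples t-test with pooled variance, which matches the reported df of 12.)</p>
<pre><code class="language-python">from math import sqrt
from statistics import mean, variance

mccain = [5669, 1945, 1997, 3025, 1388, 1433, 4518]
obama = [6767, 8983, 4639, 10922, 6839, 1395, 5320]

n1, n2 = len(mccain), len(obama)
df = n1 + n2 - 2
# Pooled (sample) variance across the two groups
sp2 = ((n1 - 1) * variance(mccain) + (n2 - 1) * variance(obama)) / df
t = (mean(mccain) - mean(obama)) / sqrt(sp2 * (1 / n1 + 1 / n2))
print(f"t({df}) = {t:.2f}")  # t(12) = -2.69
</code></pre>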
<p>Total Issues Addressed:
McCain: 18 (12 of which are accessible from the main menu)
Obama: 30 (24 of which are accessible from the main menu)</p>
<p><img src="/images/posts/2008/09/wordcount_graph.png" alt="Wordcounts"></p>
<p>McCain is red and Obama is blue.</p>
<h2>DISCUSSION</h2>
<p>From these results, the McCain campaign has roughly half the content of the Obama campaign on comparable issues, and 12 fewer issues touched upon. Though quantity does not equal quality, and this brief descriptive study doesn't purport to measure quality or thoroughness of position, an issue is far more likely to be thoroughly covered with more words than with fewer. The McCain site does not provide dedicated policy papers for any issue besides the economy; the policy positions are offered in a few sentences per heading on the issues page. The Obama site offers dedicated policy papers in PDF format for most issues, including "Additional Issues" such as the arts (919 words). These six additional issues are on top of the 24 accessible from the main site menu. Furthermore, the Obama policy papers often feature references, with which the McCain campaign's sole policy paper (The Economic Plan) does not bother. It definitely appears from this narrow study that the Obama campaign is putting its money where its mouth is with respect to making this a campaign about the issues. I only wish McCain's campaign would step up and honestly debate Obama on the issues, or at the very least let the American public know where he stands on a broader swathe of issues.</p>
<p>(This post is 545 words. It would be under the limit for a symposium at SIOP, but the statistics are probably skimpy.)</p>
<p>UPDATE: I created an <a href="http://portfolio.zachsteiner.com/wordcounts">interactive data visualization</a> for the 2016 election. It includes the data from this post as a comparison to the current candidates.</p>]]></content:encoded>
            <author>Zach</author>
            <category>Politics</category>
            <category>Statistics</category>
        </item>
        <item>
            <title><![CDATA[Os Mutantes: Parody or Tribute?]]></title>
            <link>https://zachsteiner.com/posts/2008-08-15-os-mutantes-parody-or-tribute</link>
            <guid isPermaLink="false">https://zachsteiner.com/posts/2008-08-15-os-mutantes-parody-or-tribute</guid>
            <pubDate>Fri, 15 Aug 2008 00:00:00 GMT</pubDate>
            <description><![CDATA[![Os Mutantes](/images/posts/Mutantes_Cometas.jpg)

In honor (or despite of) McDonald's use of "A Minha Menina" in a Olympics commercial, I wanted to share som]]></description>
            <content:encoded><![CDATA[<p><img src="/images/posts/Mutantes_Cometas.jpg" alt="Os Mutantes"></p>
<p>In honor of (or in spite of) McDonald's use of "A Minha Menina" in an Olympics commercial, I wanted to share some thoughts about Os Mutantes. Despite being championed by Beck and David Byrne, Os Mutantes are still relatively little known, so exposure in a commercial is a good thing. Even those hip to the band don't know much beyond their eponymous first album, from which "A Minha Menina" was taken. This may well be their best album, but the subsequent three albums are almost its equal. In particular, I want to mention their fourth album, <em>Mutantes E Seus Cometas No País Do Baurets</em>.</p>
<p>I have not seen any critic make the following important observation about the album: it functions simultaneously as brilliant parody and loving tribute, all while being great music on its own terms. The album achieves a similar end as The Flight of the Conchords, (maybe) sans the hilarious lyrics. Just as the Conchords lovingly send up various genres, so did the Mutantes decades prior. As I don't know Portuguese, I am only able to pick up musical parody rather than lyrical. Musical parody is an important dimension of The Flight of the Conchords, without which the lyrics would not be as funny or successful.</p>
<p>It starts with the opening track, which echoes British Invasion live recordings (e.g., The Kinks' <em>Live at Kelvin Hall</em>) that have screaming girls as a featured instrument, often overtaking the band. Then there is "Cantor de Mambo," which sounds like a Santana song off <em>Abraxas</em>, right down to the perfect parody/emulation of a Santana guitar solo. Or "Balada Do Louco," which is a perfect McCartney ballad sung in Portuguese.</p>
<p>Oftentimes the tribute sounds loving, as in the Santana-esque tune, but it often turns a bit wicked. Take "Balada Do Louco," which erupts with obtrusive and obnoxious Wings-esque synthesizer (as overpowering and pointless as Linda's worst) or ends with a parody of the Beatles' Indian experiments. Then there is "A Hora E A Vez Do Cabelo Nascer," which mimics Page's power chords and Plant's vocals perfectly, but skewers Zeppelin with seemingly endless false endings and veers into the ridiculous with a hacking cough mixed into the singing. Last but not least is the epic "Mutantes E Seus Cometas No País Do Baurets," which sends up British jazzy prog rock bands like The Soft Machine or post-Syd (but pre-Darkside) Pink Floyd. You can hear an even more bizarro Robert Wyatt channeled in the scatting mid-track. To add to the experimental soundscape excesses, the song ends with a proggy jam on "Powerhouse."</p>
<p>The Mutantes were very affectionate in their emulation of British bands of the '60s and '70s (see "Haleluia," their near-cover of the Stones' "You Can't Always Get What You Want," on <em>A Divina Comédia Ou Ando Meio Desligado</em>), but this album balances brilliantly between tribute and parody of the excesses and/or genre conventions. The songs would not work if they weren't strong in their own right (as the Conchords' are), or without the band's instrumental and vocal prowess. It's hard enough to mimic Macca or Plant in English, let alone Portuguese. There are certainly more tributes/parodies lurking in this album that aren't apparent given my musical experience. For any rock music dork, this is a treasure trove.</p>]]></content:encoded>
            <author>Zach</author>
            <category>Music</category>
        </item>
        <item>
            <title><![CDATA[Old Politicians never die]]></title>
            <link>https://zachsteiner.com/posts/2008-06-07-old-politicians-never-die</link>
            <guid isPermaLink="false">https://zachsteiner.com/posts/2008-06-07-old-politicians-never-die</guid>
            <pubDate>Sat, 07 Jun 2008 00:00:00 GMT</pubDate>
            <description><![CDATA[;they only get forwarded...

I was curious about what happens to old political websites when the candidate has not been running for several years. All of the 2]]></description>
            <content:encoded><![CDATA[<p>;they only get forwarded...</p>
<p>I was curious about what happens to old political websites when the candidate has not been running for several years. All of the 2008 candidates still have active sites because it's too soon, but what of the 2004 candidates?</p>
<p><a href="http://www.howarddean.com">www.howarddean.com</a> forwards to the Wikipedia page about Howard Dean. Much easier than maintaining a page!</p>
<p><a href="http://www.wesleyclark.com">www.wesleyclark.com</a> forwards to a strange site called "Common Nonsense." However, <a href="http://www.clark04.com/">www.clark04.com</a> is still an active page.</p>
<p><a href="http://www.denniskucinich.com">www.denniskucinich.com</a> oddly goes to a 2004 page, not a page for his more recent bid for president.</p>
<p>Strangest of all, <a href="http://www.johnkerry.com">www.johnkerry.com</a> goes nowhere. I guess it's fitting...</p>]]></content:encoded>
            <author>Zach</author>
            <category>Politics</category>
        </item>
        <item>
            <title><![CDATA[This really happened...]]></title>
            <link>https://zachsteiner.com/posts/2008-06-03-this-really-happened</link>
            <guid isPermaLink="false">https://zachsteiner.com/posts/2008-06-03-this-really-happened</guid>
            <pubDate>Tue, 03 Jun 2008 00:00:00 GMT</pubDate>
            <description><![CDATA[at my high school. This explains any grammatical or spelling mistakes; I had teachers like this. Yes, I did have "Crapo" in high school. This is not exaggerati]]></description>
<content:encoded><![CDATA[<p>at my high school. This explains any grammatical or spelling mistakes; I had teachers like this. Yes, I did have "Crapo" in high school. This is not an exaggeration. Names have been changed to protect the guilty.</p>
<p>I present for your enjoyment a brief vignette of high school English class, in dramatic form:</p>
<p>SCENE: A classroom. Students are reading a great piece of literature.</p>
<p>BILLY: Hey, this story we're reading kinda has parallels to the story of David and Goliath. Is that intentional?<br>
CRAPO: Um, I doubt it.<br>
BILLY: Well, [Lists a bunch of specific reasons].<br>
CRAPO: Well, lemme call Muchiniski.<br>
MUCHINISKI [over the phone]: Um, I'm teaching now, talk later. [Hangs up]<br>
CRAPO: Huh. Billy, you and um... Beth, go over there and ask him.</p>
<p>SCENE: Another classroom.</p>
<p>MUCHINISKI [to BILLY and BETH]: Tell her that if she's going to be an English teacher, she should read one of history's most important books, the Bible. Tell her that.<br>
BILLY and BETH: Um...<br>
MUCHINISKI: Just tell her to call me.</p>
<p>SCENE: First classroom.</p>
<p>CRAPO [on the phone]: Oh. Huh. Um... ok.<br>
CRAPO [to class]: He said I should, um, read the Bible. HAHA, isn't that cute?<br>
JAMES [who's apparently a vocal atheist, whom CRAPO really respects, her golden boy for the year]: Uh, yeah, even I've read the Bible.<br>
CRAPO: [Flushes]<br>
JAMES: Mrs. Crapo, you can borrow mine.</p>
<p>UPDATE: The names were changed to be more interesting, but still protect the guilty.</p>]]></content:encoded>
            <author>Zach</author>
            <category>General</category>
        </item>
        <item>
            <title><![CDATA[Guns versus Gas]]></title>
            <link>https://zachsteiner.com/posts/2008-05-23-guns-v-gas</link>
            <guid isPermaLink="false">https://zachsteiner.com/posts/2008-05-23-guns-v-gas</guid>
            <pubDate>Fri, 23 May 2008 00:00:00 GMT</pubDate>
            <description><![CDATA[[Why Guns Over Gas?][1]

The answer is not, "Because they are American..." This is consistent with what we know about rewarding people in the workplace. If you]]></description>
            <content:encoded><![CDATA[<p><a href="http://wheels.blogs.nytimes.com/2008/05/23/why-guns-over-gas/index.html">Why Guns Over Gas?</a></p>
<p>The answer is not, "Because they are American..." This is consistent with what we know about rewarding people in the workplace. If you offer someone either a vacation or a bonus on their paycheck, they will likely pick the vacation. It's not something they would regularly pay for on their own, and it "feels" more like a reward than a bonus, which would likely go toward something mundane like paying down credit card debt or getting ahead on a car or mortgage payment. Offering the choice, though, makes the vacation less appealing because of guilt. This gets into the psychology of choice. Interestingly, people tend to prefer less choice to more. Research into consumer choice indicates that Apple's lineup of computers is closer to optimal than, say, Dell's. People browse more when there is more choice, but buy more when there is less. For more, see Shah &#x26; Wolford, 2007, in <em>Psychological Science</em>.</p>]]></content:encoded>
            <author>Zach</author>
            <category>Psychology</category>
        </item>
        <item>
            <title><![CDATA[Justice]]></title>
            <link>https://zachsteiner.com/posts/2008-05-21-justice</link>
            <guid isPermaLink="false">https://zachsteiner.com/posts/2008-05-21-justice</guid>
            <pubDate>Wed, 21 May 2008 00:00:00 GMT</pubDate>
            <description><![CDATA[I study organizational justice as part of my research. Organization justice perceptions influence a lot of important outcomes in the workplace. For instance, a]]></description>
<content:encoded><![CDATA[<p>I study organizational justice as part of my research. Organizational justice perceptions influence a lot of important outcomes in the workplace. For instance, a worker who feels he or she is treated unfairly will likely not be a satisfied, committed, or even productive worker. The notion of justice (and its components or types) has been studied by psychologists since at least the 1960s. However, I recently found that the ideas underlying our notions of justice are much older. Aristotle wrote about distributive justice in his <em>Nicomachean Ethics</em>:</p>
<blockquote>
<p>"Of particular justice and that which is just in the corresponding sense, (A) one kind is that which is manifested in distributions of honour or money..." (Book 5, Chapter 2).</p>
</blockquote>
<p>His notion of distributive justice is startlingly similar to the contemporary conceptualization used in psychological research. Essentially, people get angry when their outcomes are not equitable. Following Adams's Equity Theory, people want their outputs (e.g., pay) to be commensurate with their inputs (e.g., effort). If these are out of balance, the person will perceive distributive injustice. This brings clarity to the ancient Roman notion of justice from the Justinian Code: "...the constant and permanent will to render to each person what is his right." That is vague from a legal and philosophical standpoint, but cast in the light of individual perception, equity theory can be applied. A person's due (and their right to it) stems from what they think is an appropriate outcome given what they put in. This seems troublesome given the reliance on individual perception, but psychologists study the individual and the relationships among their thoughts, feelings, and behaviors. Objective fairness is immaterial: when a person feels slighted, the gut reaction of "That's not fair!" kicks in, and that impulse is precisely what leads to the outcomes I discussed earlier.</p>
<p>These ideas are not new, per se, but I feel as though progress is being made. Aristotle did not talk about procedural or interactional justice; those are more contemporary discoveries.</p>
<p>More musings on justice to follow as I work through Raphael's <em>Concepts of Justice</em> and various book chapters and articles from the psychological literature.</p>]]></content:encoded>
            <author>Zach</author>
            <category>Psychology</category>
        </item>
    </channel>
</rss>