<?xml version="1.0" encoding="UTF-8"?>
<article xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" dtd-version="1.1" xml:lang="en">
  <front>
    <journal-meta>
      <journal-id>authorea</journal-id>
      <publisher>
        <publisher-name>Authorea</publisher-name>
      </publisher>
    </journal-meta>
    <article-meta>
      <article-id pub-id-type="doi">10.15200/winn.155318.83442</article-id>
      <title-group>
        <article-title>Hi, I&#x2019;m Alan Smith, Data visualisation editor at the Financial Times.
I&#x2019;ve just finished an experimental project at the FT to both visualise
and sonify the historical yield curve - a large dataset of over 100,000
data points. AMA!</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author" corresp="yes">
          <name>
            <surname>financialtimes</surname>
            <given-names/>
          </name>
        </contrib>
        <contrib contrib-type="author" corresp="no">
          <name>
            <surname>AMAs</surname>
            <given-names>r/Science</given-names>
          </name>
        </contrib>
      </contrib-group>
      <pub-date date-type="preprint" publication-format="electronic">
        <day>17</day>
        <month>4</month>
        <year>2023</year>
      </pub-date>
      <self-uri xlink:href="https://doi.org/10.15200/winn.155318.83442">This preprint is available at https://doi.org/10.15200/winn.155318.83442</self-uri>
      <abstract abstract-type="abstract">
        <p>Hi, I’m Alan Smith, Data visualisation editor at the Financial Times.
I’ve just finished an experimental project at the FT to both visualise
and sonify the historical yield curve - a large dataset of over 100,000
data points. I’ve filmed a step-by-step walkthrough of the project. And
the end product, a combined animated data visualisation and sonification
of four decades of the US yield curve, is available on YouTube:
https://www.youtube.com/watch?v=GoQBWcNw6IU . My full article is on the
FT website: ft.com/music-from-data My work has also coincided with
the release of a new open source tool funded by Google* that allows
users to make music from spreadsheets. So - is data sonification ready
to be the next big thing in data presentation? Can it bring data to new
audiences, including the blind/visually impaired, podcast
listeners, and those accessing the web via screenless devices with voice
interfaces? Or is it simply a novelty? Ask me anything! *TwoTone app
funded by Google (https://app.twotone.io/) Proof:
https://i.redd.it/pmafgrjd94n21.jpg</p>
      </abstract>
    </article-meta>
  </front>
</article>
