<?xml version="1.0" encoding="UTF-8"?>
<article xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" dtd-version="1.1" xml:lang="en">
  <front>
    <journal-meta>
      <journal-id>authorea</journal-id>
      <publisher>
        <publisher-name>Authorea</publisher-name>
      </publisher>
    </journal-meta>
    <article-meta>
      <article-id pub-id-type="doi">10.36227/techrxiv.170905722.27435902/v2</article-id>
      <title-group>
        <article-title>Attention-aware Semantic Communications for Collaborative Inference</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author" corresp="no">
          <name>
            <surname>Im</surname>
            <given-names>Jiwoong</given-names>
          </name>
        </contrib>
        <contrib contrib-type="author" corresp="no">
          <name>
            <surname>Kwon</surname>
            <given-names>Nayoung</given-names>
          </name>
        </contrib>
        <contrib contrib-type="author" corresp="no">
          <name>
            <surname>Park</surname>
            <given-names>Taewoo</given-names>
          </name>
        </contrib>
        <contrib contrib-type="author" corresp="no">
          <name>
            <surname>Woo</surname>
            <given-names>Jiheon</given-names>
          </name>
        </contrib>
        <contrib contrib-type="author" corresp="no">
          <name>
            <surname>Lee</surname>
            <given-names>Jaeho</given-names>
          </name>
        </contrib>
        <contrib contrib-type="author" corresp="yes">
          <contrib-id contrib-id-type="orcid">0000-0003-0120-3750</contrib-id>
          <name>
            <surname>Kim</surname>
            <given-names>Yongjune</given-names>
          </name>
        </contrib>
      </contrib-group>
      <pub-date date-type="preprint" publication-format="electronic">
        <day>4</day>
        <month>3</month>
        <year>2024</year>
      </pub-date>
      <self-uri xlink:href="https://doi.org/10.36227/techrxiv.170905722.27435902/v2">This preprint is available at https://doi.org/10.36227/techrxiv.170905722.27435902/v2</self-uri>
      <abstract abstract-type="abstract">
        <p>We propose a communication-efficient collaborative inference framework
for edge inference, focusing on the efficient use of vision transformer
(ViT) models. The partitioning strategy of conventional collaborative
inference fails to reduce communication cost because ViTs maintain
consistent layer dimensions across the entire transformer encoder.
Therefore, instead of partitioning the model, our framework deploys a
lightweight ViT model on the edge device and a larger, more complex ViT
model on the server. To enhance communication efficiency while achieving
the classification accuracy of the server model, we propose two
strategies: 1) attention-aware patch selection and 2) entropy-aware
image transmission. Attention-aware patch selection leverages the
attention scores generated by the edge device’s transformer encoder to
identify and select the image patches critical for classification,
enabling the edge device to transmit only the essential patches to the
server and significantly improving communication efficiency.
Entropy-aware image transmission uses min-entropy as a metric to
determine whether to rely on the lightweight model on the edge device or
to request inference from the server model. In our framework, the
lightweight ViT model on the edge device acts as a semantic encoder,
efficiently identifying and selecting the crucial image information
required for the classification task. Our experiments demonstrate that
the proposed collaborative inference framework reduces communication
overhead by 68% with only a minimal loss in accuracy compared to the
server model.</p>
      </abstract>
      <kwd-group kwd-group-type="author-created">
        <kwd>Internet of things (IoT)</kwd>
        <kwd>collaborative inference</kwd>
        <kwd>communication, networking and broadcast technologies</kwd>
        <kwd>computing and processing</kwd>
        <kwd>edge computing</kwd>
        <kwd>edge inference</kwd>
        <kwd>semantic communications</kwd>
        <kwd>split inference</kwd>
        <kwd>vision transformer</kwd>
      </kwd-group>
    </article-meta>
  </front>
</article>
