Introduction

The time students spend using digital devices for learning is increasing rapidly with the development of new, portable, and instantly accessible technology such as smartphones and digital tablets. During the pandemic, online learning increased dramatically. This explosion of e-learning, which differs from traditional pen and paper, creates a difficult situation because screens and paper have different natures: a screen glows and emits blue light to display its content, while paper does not. Long-term exposure to bright light, especially blue light at night, has many negative consequences for the user [1]. While studying online, students have to view the screen for long periods, and screen brightness can become the bottleneck that limits how long they can study. Dark mode is therefore a simple but effective way to reduce blue-light emission [2]. Implementing dark mode is not just about making the background darker; it also requires balancing the color combination and the contrast between the background and UI elements such as icons, text, and images [3] (a minimal implementation sketch is given below). Dark mode offers considerable relief from the dry, painful eyes caused by studying on a bright blue screen all night long, and bright light also suppresses the secretion of melatonin, a hormone needed for sleep [4]. Beyond the health benefits, dark mode helps extend battery life, noticeably so on OLED screens [5].

The dark mode created in this work is analyzed using principles of Human-Computer Interaction (HCI), understood as the design, evaluation, and implementation of interactive computing systems for human use. HCI is a key part of computer science, psychology, and design, because building a successful interface requires understanding how people interact with devices and with graphical and collaborative platforms. Dark mode is a recent trend in UI/UX design that almost all Silicon Valley companies are adopting [6]. Today almost every major website supports dark mode, and operating systems including Windows, iOS, and Android have added it in their latest updates; Facebook, Twitter, YouTube, and Reddit are a few of the big companies that have already included dark mode in their products. Although dark mode is a simple concept, it has significant advantages over other interface designs, which is why the trend is still growing and is rapidly being adopted by companies worldwide [7].
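The following sketch illustrates what such an implementation involves on a web-based learning platform. It is a minimal example, not the implementation used in this study: the data-theme attribute, the storage key, and the palette are illustrative assumptions, and it presumes the site's CSS expresses colors through custom properties so that flipping one attribute re-themes every element.

```typescript
// Minimal dark-mode toggle sketch (illustrative, not this study's code).
// Assumes CSS along the lines of:
//   :root[data-theme="light"] { --bg: #ffffff; --fg: #000000; }
//   :root[data-theme="dark"]  { --bg: #121212; --fg: #e0e0e0; }

type Theme = "light" | "dark";

const STORAGE_KEY = "preferred-theme"; // hypothetical key name

function systemTheme(): Theme {
  // prefers-color-scheme exposes the OS-level dark-mode setting.
  return window.matchMedia("(prefers-color-scheme: dark)").matches
    ? "dark"
    : "light";
}

function applyTheme(theme: Theme): void {
  // One attribute flip restyles everything bound to the custom properties.
  document.documentElement.setAttribute("data-theme", theme);
  localStorage.setItem(STORAGE_KEY, theme);
}

function initTheme(): void {
  // An explicit user choice wins; otherwise follow the OS preference.
  const stored = localStorage.getItem(STORAGE_KEY) as Theme | null;
  applyTheme(stored ?? systemTheme());
}

function toggleTheme(): void {
  const current = document.documentElement.getAttribute("data-theme");
  applyTheme(current === "dark" ? "light" : "dark");
}

initTheme();
```

Keeping the palette in CSS custom properties rather than in script is what makes the balancing described above a design task: each color token can be tuned per theme while the toggle logic stays unchanged.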
Related HCI research spans several strands. Tested in an exploratory project involving apartment selection, a human-computer interface was created in a study by Archer et al. [8] to investigate user preferences and the efficacy of output styles and levels of information abstraction in a decision-making environment. The study found that voice alone was less effective for information search than text plus voice; nevertheless, there were no appreciable differences in preference between text and voice or between text and text plus voice, and task performance was influenced more by cognitive style than by task-domain experience. In a study by Kuhner et al. [9], the first online brain-computer interface using deep learning and a menu-driven language-generation system based on referring expressions were created for a BCI-controlled autonomous robotic service assistant; to determine the system's efficacy, it was integrated with a modular ROS-based mobile robot and tested experimentally on a real robot. To investigate user preferences, the efficacy of output modalities, and the degrees of information abstraction in decision making, a human-computer interface was likewise created in a study by Yang et al. [10]; their exploratory study on apartment selection identified disparities between the perceived relevance and the actual utilization of information attributes and abstraction levels, showing that text with voice was favored over voice alone and that text was the most effective for information search.

As computers become more sophisticated, software designers use anthropomorphism to improve interface usability. In the studies by Laere et al. [11] and Mukhopadhyay et al. [12], both of which examined human-like and machine-like interaction styles, participants' reflected assessments and self-appraisals were strongly affected by computer input, but there was no discernible difference in that impact between the two interface types. The chapter by Birbaumer et al. [13] covered the use of eye movements in user interfaces for interaction control and usability analysis: eye movements were recorded and analyzed for usability after the fact, and were also used in real time as an input method, demonstrating the promising potential of eye tracking in human-computer interaction, either as the sole input for users with disabilities or hands-occupied applications, or combined with other inputs. Jaklič et al. [14] discussed video-conferencing systems created to enable reliable and effective communication across vast distances using the Internet and personal computers. Eye contact has always been important in such communication, but because cameras are usually mounted above computer monitors, making eye contact is difficult; the article suggests a straightforward method, supported by experimental evidence, for improving the perception of eye contact based on how 3D scenes are viewed on slanted surfaces. Despite being inexpensive, these systems have not been widely adopted.

Mobile-device interface and interaction design also has a large impact on cognitive offloading: compared with less responsive or mouse-based controls, participants offloaded more information when using highly responsive touch-based controls, and subjective measurements suggested that metacognitive judgments were involved as well, as discussed by Grinschgl et al. [15]. According to the study [16] by Zaina et al., accessibility barriers may be introduced while developing mobile applications, frequently as a result of standard UI design patterns in software development; these barriers must be understood in the context of the challenges software professionals face, and recommendations have been put forward to prevent accessibility problems in mobile development. Finally, two innovative gaze-based predictive user interfaces were proposed that can dynamically provide adaptive interventions in line with the user's task-related objectives and intents.
In addition to managing uncertainty in the prediction-model outputs, these interfaces preserve usability and do not negatively affect perceived task load, as reported in the study [17]. In the study [18] by Zhong et al., a cross-stitch technique was used to construct a fiber-based wearable tactile sensor array incorporated into human-machine interfaces; the system, which included sound-feedback components, microprocessors, and sensors, enabled an interactive process, and a fast signal-prediction technique was proposed to speed up reaction times and improve operational behavior.

Many social media platforms, websites, and applications are introducing dark mode as an additional feature, and within just a couple of years almost every popular website, app, and even operating system has made it an integral feature. E-learning sites are adopting it as well, but they differ from other sites in one important way: they are a replacement for books and notebooks. While other sites serve many different use cases, the main purpose of an e-learning site is to teach and to learn. Since this teaching-and-learning methodology has long been practiced with paper books and notebooks, replacing them with screens will certainly affect users and usage. Dark mode replaces the typical view of dark text on a white background with white text on a dark background; a worked contrast comparison of these two palettes is given below. This research therefore studies the impact of dark mode on HCI for students. Our hypothesis is that, for students who use a computer screen for long study sessions, dark mode reduces eye strain and motivates them to study longer than the normal light mode does. In this paper, we discuss and investigate this topic using the MOOC system that our university uses as its online learning platform.
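To make the light/dark swap concrete, the sketch below computes WCAG 2.x contrast ratios for the two palettes just described. The relative-luminance and (L_hi + 0.05)/(L_lo + 0.05) formulas are the standard WCAG definitions; the specific dark-theme colors (#e0e0e0 on #121212) are illustrative assumptions, not values from this study.

```typescript
// WCAG 2.x contrast check: classic light mode vs. an illustrative dark palette.

// Linearize one 8-bit sRGB channel per the WCAG relative-luminance formula.
function channel(c8: number): number {
  const c = c8 / 255;
  return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}

// Relative luminance of a "#rrggbb" color.
function luminance(hex: string): number {
  const n = parseInt(hex.slice(1), 16);
  return (
    0.2126 * channel((n >> 16) & 0xff) +
    0.7152 * channel((n >> 8) & 0xff) +
    0.0722 * channel(n & 0xff)
  );
}

// Contrast ratio (L_hi + 0.05) / (L_lo + 0.05); ranges from 1:1 to 21:1.
function contrastRatio(fg: string, bg: string): number {
  const [hi, lo] = [luminance(fg), luminance(bg)].sort((a, b) => b - a);
  return (hi + 0.05) / (lo + 0.05);
}

console.log(contrastRatio("#000000", "#ffffff").toFixed(1)); // 21.0 — black on white
console.log(contrastRatio("#e0e0e0", "#121212").toFixed(1)); // ~14.2 — off-white on near-black
```

Both pairs clear the WCAG AAA threshold of 7:1, which illustrates the point above: a dark theme need not sacrifice legibility, but its contrast has to be checked deliberately rather than assumed.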
