Journal Description
Information

Information is a scientific, peer-reviewed, open access journal of information science and technology, data, knowledge, and communication, published monthly online by MDPI. The International Society for Information Studies (IS4SI) is affiliated with Information, and its members receive discounts on the article processing charges.
- Open Access — free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, ESCI (Web of Science), Ei Compendex, dblp, and other databases.
- Journal Rank: CiteScore - Q2 (Information Systems)
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 18 days after submission; acceptance to publication takes 2.9 days (median values for papers published in this journal in the second half of 2023).
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
Impact Factor: 3.1 (2022); 5-Year Impact Factor: 2.9 (2022)
Latest Articles
Metaverse Applications in Bioinformatics: A Machine Learning Framework for the Discrimination of Anti-Cancer Peptides
Information 2024, 15(1), 48; https://doi.org/10.3390/info15010048 - 15 Jan 2024
Abstract
Bioinformatics and genomics are driving a healthcare revolution, particularly in the domain of drug discovery for anticancer peptides (ACPs). The integration of artificial intelligence (AI) has transformed healthcare, enabling personalized and immersive patient care experiences. These advanced technologies, coupled with the power of bioinformatics and genomic data, facilitate groundbreaking developments. The precise prediction of ACPs from complex biological sequences remains an ongoing challenge in genomics. Currently, conventional approaches such as chemotherapy, targeted therapy, radiotherapy, and surgery are widely used for cancer treatment. However, these methods fail to completely eradicate neoplastic cells or cancer stem cells and damage healthy tissues, resulting in morbidity and even mortality. To control such diseases, oncologists and drug designers seek new preventive techniques with greater efficiency and fewer side effects. Therefore, this research provides an optimized computational framework for discriminating ACPs. The proposed approach intelligently integrates four peptide encoding methods, namely amino acid occurrence analysis (AAOA), dipeptide occurrence analysis (DOA), tripeptide occurrence analysis (TOA), and enhanced pseudo amino acid composition (EPseAAC). To overcome bias and reduce true error, the synthetic minority oversampling technique (SMOTE) is applied to balance the samples across classes. The empirical results over two datasets, where the accuracy of the proposed model is 97.56% on the benchmark dataset and 95.00% on the independent dataset, verify the effectiveness of our ensemble learning mechanism and show remarkable performance compared with state-of-the-art (SOTA) methods.
In addition, the application of metaverse technology in healthcare holds promise for transformative innovations, potentially enhancing patient experiences and providing novel solutions in the realm of preventive techniques and patient care.
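The class-balancing step the abstract mentions, SMOTE, generates synthetic minority samples by interpolating between a real minority sample and one of its nearest minority neighbours. A minimal NumPy sketch of that idea follows; the function name and toy 2-D data are illustrative, not the authors' implementation:

```python
import numpy as np

def smote_oversample(X_minority, n_new, k=3, seed=0):
    """Generate n_new synthetic minority samples by interpolating
    between each sample and one of its k nearest minority neighbours
    (the core idea behind SMOTE)."""
    rng = np.random.default_rng(seed)
    X = np.asarray(X_minority, dtype=float)
    # pairwise distances within the minority class
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    neighbours = np.argsort(d, axis=1)[:, :k]   # k nearest neighbours per sample
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X))                # pick a minority sample
        j = rng.choice(neighbours[i])           # and one of its neighbours
        gap = rng.random()                      # interpolation factor in [0, 1)
        synthetic.append(X[i] + gap * (X[j] - X[i]))
    return np.vstack(synthetic)

minority = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
new_points = smote_oversample(minority, n_new=4)
print(new_points.shape)  # (4, 2)
```

Each synthetic point lies on a segment between two real minority points, so the oversampled class stays inside the region the real samples span.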
Full article
(This article belongs to the Special Issue Applications of Deep Learning in Bioinformatics and Image Processing)
Open Access Article
Identifying Smartphone Users Based on Activities in Daily Living Using Deep Neural Networks
Information 2024, 15(1), 47; https://doi.org/10.3390/info15010047 - 15 Jan 2024
Abstract
Smartphones have become ubiquitous, allowing people to perform various tasks anytime and anywhere. As technology continues to advance, smartphones can now sense and connect to networks, providing context-awareness for different applications. Many individuals store sensitive data, such as financial credentials and personal information, on their devices because of their convenience and accessibility. However, losing control of this data poses risks if the phone is lost or stolen. While passwords, PINs, and pattern locks are common security methods, they can still be compromised through exploits such as smudge residue left from touching the screen. This research explored leveraging smartphone sensors to authenticate users based on behavioral patterns exhibited when operating the device. The proposed technique uses a deep learning model called DeepResNeXt, a type of deep residual network, to identify smartphone owners accurately and efficiently from sensor data. Publicly available smartphone datasets were used to train the suggested model and other state-of-the-art networks for user recognition. Multiple experiments validated the effectiveness of this framework, which surpassed previous benchmark models in this area with a top F1-score of 98.96%.
Full article
(This article belongs to the Special Issue Advanced Technologies in Intelligent Detection of Biological Information)
Open Access Article
A Holistic Approach to Ransomware Classification: Leveraging Static and Dynamic Analysis with Visualization
Information 2024, 15(1), 46; https://doi.org/10.3390/info15010046 - 14 Jan 2024
Abstract
Ransomware is a type of malicious software that encrypts a victim’s files and demands payment in exchange for the decryption key. It is a rapidly growing and evolving threat that has caused significant damage and disruption to individuals and organizations around the world. In this paper, we propose a comprehensive ransomware classification approach based on the comparison of similarity matrices derived from static analysis, dynamic analysis, and visualization. Our approach uses multiple analysis techniques to extract features from ransomware samples and to generate similarity matrices based on these features. These matrices are then compared using a variety of comparison algorithms to identify similarities and differences between the samples. The resulting similarity scores are used to classify the samples into categories such as families, variants, and versions. We evaluate our approach using a dataset of ransomware samples and demonstrate that it classifies the samples with a high degree of accuracy. One advantage of our approach is the use of visualization, which allows us to classify and cluster large datasets of ransomware in a more intuitive and effective way. In addition, static analysis has the advantage of being fast and accurate, while dynamic analysis allows us to classify and cluster packed ransomware samples. We also compare our approach to classification approaches based on single analysis techniques and show that ours outperforms them in terms of classification accuracy. Overall, our study demonstrates the potential of a comprehensive approach based on the comparison of multiple analysis techniques, including static analysis, dynamic analysis, and visualization, for the accurate and efficient classification of ransomware.
It also highlights the importance of considering multiple analysis techniques in the development of effective ransomware classification methods, especially when dealing with large datasets and packed samples.
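The core mechanism the abstract describes, comparing per-sample feature vectors via a similarity matrix and grouping samples that score above a threshold, can be sketched as below. The cosine metric, the 0.9 threshold, the greedy grouping rule, and the toy "opcode histogram" features are assumptions for illustration, not the paper's exact pipeline:

```python
import numpy as np

def cosine_similarity_matrix(features):
    """Pairwise cosine similarity between feature vectors
    (e.g. opcode histograms extracted by static analysis)."""
    F = np.asarray(features, dtype=float)
    norms = np.linalg.norm(F, axis=1, keepdims=True)
    F = F / np.clip(norms, 1e-12, None)
    return F @ F.T

def group_by_threshold(sim, threshold=0.9):
    """Greedy grouping: a sample joins the first existing group whose
    representative is at least `threshold` similar; otherwise it starts
    a new group. `groups` stores one representative index per group."""
    groups = []
    labels = [-1] * len(sim)
    for i in range(len(sim)):
        for gid, rep in enumerate(groups):
            if sim[i, rep] >= threshold:
                labels[i] = gid
                break
        else:
            groups.append(i)
            labels[i] = len(groups) - 1
    return labels

# toy feature vectors for four samples: two near-identical pairs
feats = [[9, 1, 0], [8, 1, 0], [0, 5, 5], [0, 4, 6]]
sim = cosine_similarity_matrix(feats)
print(group_by_threshold(sim))  # → [0, 0, 1, 1]
```

With real samples, one such matrix per analysis technique (static, dynamic, visual) would be produced and the scores combined before grouping into families, variants, and versions.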
Full article
(This article belongs to the Special Issue Wireless IoT Network Protocols II)
Open Access Article
ABAC Policy Mining through Affiliation Networks and Biclique Analysis
Information 2024, 15(1), 45; https://doi.org/10.3390/info15010045 - 12 Jan 2024
Abstract
Policy mining is an automated procedure for generating access rules by mining patterns from single permissions, which are typically registered in access logs. Attribute-based access control (ABAC) is a model that allows security administrators to create a set of rules, known as the access control policy, to restrict access in information systems by means of logical expressions defined over the attribute values of three types of entities: users, resources, and environmental conditions. Policy mining is essential in large-scale ABAC-oriented systems because it is not feasible to create rules by hand when the system must manage thousands of users and resources. In the literature on ABAC policy mining, current solutions follow a frequency-based strategy to extract rules; the problem with that approach is that selecting a high frequency support leaves many resources without rules (especially those with few requesters), while a low support leads to an explosion of unreliable rules. Another challenge is the difficulty of collecting a set of test examples for correctness evaluation, since the classes of user–resource pairs available in logs are imbalanced. Moreover, alternative evaluation criteria for correctness, such as peculiarity and diversity, have not been explored for ABAC policy mining. To address these challenges, we propose modeling access logs as affiliation networks and applying network and biclique analysis techniques (1) to extract ABAC rules supported by graph patterns without a frequency threshold, (2) to generate synthetic examples for correctness evaluation, and (3) to create alternative evaluation measures to correctness. We found that the rules extracted through our strategy can cover more resources than the frequency-based strategy without rule explosion; moreover, our synthetic examples are useful for increasing the certainty level of correctness results.
Finally, our alternative measures offer a wider evaluation profile for policy mining.
Full article
(This article belongs to the Special Issue Complex Network Analysis in Security)
Open Access Article
Radar-Based Invisible Biometric Authentication
Information 2024, 15(1), 44; https://doi.org/10.3390/info15010044 - 12 Jan 2024
Abstract
Bio-Radar (BR) systems have shown great promise for biometric applications. Conventional methods can be forged or fooled. Even alternative methods intrinsic to the user, such as the electrocardiogram (ECG), present drawbacks, as they require contact with the sensor. Therefore, research has turned towards alternative methods such as the BR. In this work, a BR dataset with 20 subjects exposed to different emotion-eliciting stimuli (happiness, fearfulness, and neutrality) on different dates was explored. The spectral distributions of the BR signal were studied as the biometric template. Furthermore, this study included the analysis of respiratory and cardiac signals separately, as well as their fusion. The main test devised was authentication, where a system seeks to validate an individual’s claimed identity. With this test, it was possible to infer the feasibility of this type of system, obtaining an Equal Error Rate (EER) of [...] if the training and testing data are from the same day and within the same emotional stimuli. In addition, the dependency of the results on time and emotion is fully analysed. Complementary tests, such as sensitivity to the number of users, were also performed. Overall, this work provides an evaluation of the potential of BR systems for biometrics.
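The Equal Error Rate reported in authentication studies like this one is the operating point where the false accept rate (FAR) equals the false reject rate (FRR). A hedged sketch of how an EER is typically computed from genuine and impostor match scores; the scores below are toy values, not taken from the BR dataset:

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """Sweep a decision threshold over all observed scores and return
    the point where the false accept rate (FAR) and false reject rate
    (FRR) are closest -- the Equal Error Rate (EER)."""
    genuine = np.asarray(genuine, float)
    impostor = np.asarray(impostor, float)
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    best = (float("inf"), None)
    for t in thresholds:
        far = np.mean(impostor >= t)   # impostors wrongly accepted
        frr = np.mean(genuine < t)     # genuine users wrongly rejected
        gap = abs(far - frr)
        if gap < best[0]:
            best = (gap, (far + frr) / 2)
    return best[1]

# toy similarity scores: higher means "more likely the claimed identity"
genuine_scores  = [0.9, 0.8, 0.85, 0.6, 0.95]
impostor_scores = [0.3, 0.4, 0.2, 0.65, 0.1]
print(f"EER ~ {equal_error_rate(genuine_scores, impostor_scores):.2f}")  # EER ~ 0.20
```

A lower EER means genuine and impostor score distributions overlap less, i.e. a stronger biometric template.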
Full article
(This article belongs to the Special Issue Application of Machine Learning and Deep Learning in Pattern Recognition and Biometrics)
Open Access Article
A Traceable Universal Designated Verifier Transitive Signature Scheme
Information 2024, 15(1), 43; https://doi.org/10.3390/info15010043 - 12 Jan 2024
Abstract
A transitive signature scheme enables anyone to obtain the signature on edge (i, k) by combining the signatures on edges (i, j) and (j, k), but it suffers from signature theft and signature abuse. Existing work has addressed these problems with a universal designated verifier transitive signature (UDVTS). However, the UDVTS scheme only enables the designated verifier to authenticate signatures, which provides a simple way for the signer to deny having signed some messages. Because the UDVTS is not publicly verifiable, the verifier cannot seek help in arbitrating the source of signatures. To address this problem, this paper proposes a traceable universal designated verifier transitive signature (TUDVTS) and its security model. We introduce a tracer into the system who will trace the signature back to its true source after the verifier has submitted an application for arbitration. To show the feasibility of our primitive, we construct a concrete scheme from a bilinear group pair of prime order and prove that the scheme satisfies unforgeability, privacy, and traceability.
Full article
(This article belongs to the Special Issue Editorial Board Members’ Collection Series: "Information Security and Privacy")
Open Access Article
Simulation-Enhanced MQAM Modulation Identification in Communication Systems: A Subtractive Clustering-Based PSO-FCM Algorithm Study
Information 2024, 15(1), 42; https://doi.org/10.3390/info15010042 - 12 Jan 2024
Abstract
Signal modulation recognition often relies on clustering algorithms. The fuzzy c-means (FCM) algorithm, which is commonly used for such tasks, often converges to local optima. This presents a challenge, particularly in low-signal-to-noise-ratio (SNR) environments. We propose an enhanced FCM algorithm that incorporates particle swarm optimization (PSO) to improve the accuracy of recognizing M-ary quadrature amplitude modulation (MQAM) signal orders. The method is a two-step clustering process. First, a subtractive clustering algorithm based on SNR uses the constellation diagram of the received signal to determine the initial number of cluster centers. The PSO-FCM algorithm then refines these centers to improve precision. Accurate signal classification and identification are achieved by evaluating the relative sizes of the radii around the cluster centers within the MQAM constellation diagram and determining the modulation order. The results indicate that the subtractive-clustering-based PSO-FCM algorithm outperforms conventional FCM in clustering effectiveness, notably enhancing modulation recognition rates in low-SNR conditions when evaluated against a variety of QAM signals ranging from 4QAM to 64QAM.
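The refinement step builds on the standard fuzzy c-means updates: alternate between recomputing fuzzy memberships from distances to the centres and recomputing centres as membership-weighted means. A minimal plain-FCM sketch of those two updates; the PSO/subtractive-clustering seeding is replaced here by simply picking two data points, and the 2-D blobs stand in for constellation points:

```python
import numpy as np

def fcm(X, n_clusters, m=2.0, iters=50):
    """Minimal fuzzy c-means: alternate membership and centre updates.
    (The paper seeds and refines centres with PSO; here we just start
    from the first and last data points for a deterministic sketch.)"""
    X = np.asarray(X, float)
    centres = X[[0, -1]][:n_clusters]
    for _ in range(iters):
        # distances to centres, floored to avoid division by zero
        d = np.linalg.norm(X[:, None] - centres[None, :], axis=-1)
        d = np.clip(d, 1e-9, None)
        # membership: u_ij = 1 / sum_k (d_ij / d_ik)^(2/(m-1))
        ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))
        u = 1.0 / ratio.sum(axis=2)
        # centre update: mean of points weighted by u^m
        w = u ** m
        centres = (w.T @ X) / w.sum(axis=0)[:, None]
    return centres, u

# two well-separated blobs standing in for constellation clusters
pts = [[0, 0], [0.1, 0], [0, 0.1], [5, 5], [5.1, 5], [5, 5.1]]
centres, memberships = fcm(pts, n_clusters=2)
print(np.round(np.sort(centres[:, 0])))  # → [0. 5.]
```

Because FCM can stall in local optima from a poor start, seeding the centres well (the role PSO plays in the paper) matters more as SNR drops and clusters blur together.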
Full article
(This article belongs to the Topic Fuzzy Number, Fuzzy Difference, Fuzzy Differential: Theory and Applications)
Open Access Article
Public Health Implications for Effective Community Interventions Based on Hospital Patient Data Analysis Using Deep Learning Technology in Indonesia
Information 2024, 15(1), 41; https://doi.org/10.3390/info15010041 - 11 Jan 2024
Abstract
Public health is an important aspect of community activities, and research on it is necessary because it is crucial to maintaining and improving the quality of life of society as a whole. Research on public health allows for a deeper understanding of the health problems faced by a population, including disease prevalence, risk factors, and other determinants of health. This work aims to explore the potential of hospital patient data analysis as a valuable tool for understanding community implications and deriving insights for effective community health interventions. The study recognises the significance of harnessing the vast amount of data generated within hospital settings to inform population-level health strategies. The methodology involves the collection and analysis of deidentified patient data from a representative sample of a hospital in Indonesia. Various data analysis techniques, such as statistical modelling, data mining, and machine learning algorithms, are utilised to identify patterns, trends, and associations within the data. A program written in Python is used to analyse five years of hospital patient data, from 2018 to 2022. These findings are then interpreted within the context of public health implications, considering factors such as disease prevalence, socioeconomic determinants, and healthcare utilisation patterns. The results of the data analysis provide valuable insights into the public health implications of hospital patient data. The research also covers predictions of patient admissions to the hospital based on disease, age, and geographical residence. The prediction shows that in 2023 the number of patients will not be considerably affected by infection, but from March to April 2024 the number will increase significantly, to up to 10,000 patients, following the trend observed at the end of 2022.
These recommendations encompass targeted prevention strategies, improved healthcare delivery models, and community engagement initiatives. The research emphasises the importance of collaboration between healthcare providers, policymakers, and community stakeholders in implementing and evaluating these interventions.
Full article
(This article belongs to the Special Issue Advances in AI for Health and Medical Applications)
Open Access Article
Secure Genomic String Search with Parallel Homomorphic Encryption
Information 2024, 15(1), 40; https://doi.org/10.3390/info15010040 - 11 Jan 2024
Abstract
Fully homomorphic encryption (FHE) cryptographic systems enable limitless computations over encrypted data, providing solutions to many of today’s data security problems. While effective FHE platforms can address modern data security concerns in insecure environments, the extended execution time of these platforms hinders their broader application. This project aims to enhance FHE systems through an efficient parallel framework, specifically building upon the existing torus FHE (TFHE) system of Chillotti et al. The TFHE system was chosen for its superior bootstrapping computations and precise results for countless Boolean gate evaluations, such as AND and XOR. Our first approach was to expand upon the gate operations within the current system, shifting towards algebraic circuits and using graphics processing units (GPUs) to manage cryptographic operations in parallel. We then applied this GPU-parallel FHE framework to a needed genomic data operation, namely string search. We utilized popular string distance metrics (Hamming distance, edit distance, set maximal matches) to ascertain the disparities between multiple genomic sequences in a secure context, with all data and operations occurring under encryption. Our experimental data revealed that our GPU implementation vastly outperforms the former method, providing a 20-fold speedup for any 32-bit Boolean operation and a 14.5-fold increase for multiplications. This paper introduces unique enhancements to existing FHE cryptographic systems using GPUs and additional algorithms to quicken fundamental computations. Looking ahead, the presented framework can be further developed to accommodate more complex, real-world applications.
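Of the string distance metrics listed, Hamming distance is the simplest: the count of positions at which two equal-length sequences differ. The paper evaluates this under homomorphic encryption; the plaintext sketch below only illustrates the metric itself, not the encrypted protocol:

```python
def hamming_distance(seq_a, seq_b):
    """Number of positions at which two equal-length genomic
    sequences differ."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must have equal length")
    return sum(a != b for a, b in zip(seq_a, seq_b))

print(hamming_distance("GATTACA", "GACTATA"))  # → 2
```

Under TFHE the same computation becomes an XOR of the encrypted symbol encodings followed by an encrypted popcount, which is why fast Boolean gate evaluation matters so much for the framework's throughput.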
Full article
(This article belongs to the Special Issue Digital Privacy and Security)
Open Access Article
Time Series Forecasting Utilizing Automated Machine Learning (AutoML): A Comparative Analysis Study on Diverse Datasets
Information 2024, 15(1), 39; https://doi.org/10.3390/info15010039 - 11 Jan 2024
Abstract
Automated Machine Learning (AutoML) tools are revolutionizing the field of machine learning by significantly reducing the need for deep computer science expertise. Designed to make ML more accessible, they enable users to build high-performing models without extensive technical knowledge. This study delves into these tools in the context of time series analysis, which is essential for forecasting future trends from historical data. We evaluate three prominent AutoML tools—AutoGluon, Auto-Sklearn, and PyCaret—across various metrics, employing diverse datasets that include Bitcoin and COVID-19 data. The results reveal that the performance of each tool is highly dependent on the specific dataset and its ability to manage the complexities of time series data. This thorough investigation not only demonstrates the strengths and limitations of each AutoML tool but also highlights the criticality of dataset-specific considerations in time series analysis. Offering valuable insights for both practitioners and researchers, this study emphasizes the ongoing need for research and development in this specialized area. It aims to serve as a reference for organizations dealing with time series datasets and a guiding framework for future academic research in enhancing the application of AutoML tools for time series forecasting and analysis.
Full article
(This article belongs to the Special Issue New Deep Learning Approach for Time Series Forecasting)
Open Access Review
Unmanned Autonomous Intelligent System in 6G Non-Terrestrial Network
Information 2024, 15(1), 38; https://doi.org/10.3390/info15010038 - 11 Jan 2024
Abstract
Non-terrestrial networks (NTNs) are a trending topic in the field of communication, as they show promise for scenarios in which terrestrial infrastructure is unavailable. Unmanned autonomous intelligent systems (UAISs), as a physical form of artificial intelligence (AI), have gained significant attention from academia and industry. These systems have various applications in autonomous driving, logistics, area surveillance, and medical services. With the rapid evolution of information and communication technology (ICT), 5G and beyond-5G communication have enabled numerous intelligent applications through the comprehensive utilization of advanced NTN communication technology and artificial intelligence. To meet the demands of complex tasks in remote or communication-challenged areas, there is an urgent need for reliable, ultra-low-latency communication networks to enable unmanned autonomous intelligent systems for applications such as localization, navigation, perception, decision-making, and motion planning. However, in remote areas, reliable communication coverage is not available, which poses a significant challenge for intelligent systems applications. The rapid development of NTN communication has shed new light on intelligent applications that require ubiquitous network connections in space, air, ground, and sea. However, challenges arise when using NTN technology in unmanned autonomous intelligent systems. Our research examines the advancements and obstacles in academic research and industry applications of NTN technology concerning UAISs, which are supported by unmanned aerial vehicles (UAVs) and other low-altitude platforms. Moreover, edge computing and cloud computing are crucial for unmanned autonomous intelligent systems, which also necessitate distributed computation architectures for computationally intensive tasks and massive data offloading.
This paper presents a comprehensive analysis of the opportunities and challenges of unmanned autonomous intelligent systems in UAV NTN, along with NTN-based unmanned autonomous intelligent systems and their applications. A field trial case study is presented to demonstrate the application of NTN in UAIS.
Full article
(This article belongs to the Special Issue Big Data Analytics and Mobile Cloud Computing in the Internet of Things)
Open Access Review
Parametric and Nonparametric Machine Learning Techniques for Increasing Power System Reliability: A Review
Information 2024, 15(1), 37; https://doi.org/10.3390/info15010037 - 11 Jan 2024
Abstract
Due to aging infrastructure, technical issues, increased demand, and environmental developments, the reliability of power systems is of paramount importance. Utility companies aim to provide uninterrupted and efficient power supply to their customers. To achieve this, they focus on implementing techniques and methods to minimize downtime in power networks and reduce maintenance costs. In addition to traditional statistical methods, modern technologies such as machine learning have become increasingly common for enhancing system reliability and customer satisfaction. The primary objective of this study is to review parametric and nonparametric machine learning techniques and their applications in relation to maintenance-related aspects of power distribution system assets, including (1) distribution lines, (2) transformers, and (3) insulators. Compared to other reviews, this study offers a unique perspective on machine learning algorithms and their predictive capabilities in relation to the critical components of power distribution systems.
Full article
Open Access Article
Rapid Forecasting of Cyber Events Using Machine Learning-Enabled Features
Information 2024, 15(1), 36; https://doi.org/10.3390/info15010036 - 11 Jan 2024
Abstract
In recent years, there has been a notable surge in both the complexity and volume of targeted cyber attacks, largely due to heightened vulnerabilities in widely adopted technologies. The prediction and early detection of attacks are vital to mitigating potential risks from cyber attacks and to network resilience. With the rapid increase in digital data and the increasing complexity of cyber attacks, big data has become a crucial tool for intrusion detection and forecasting. By leveraging the capabilities of unstructured big data, intrusion detection and forecasting systems can become more effective in detecting and preventing cyber attacks and anomalies. While some progress has been made on attack prediction, little attention has been given to forecasting cyber events based on time series and unstructured big data. In this research, we used the CSE-CIC-IDS2018 dataset, a comprehensive dataset containing several attacks on a realistic network. We then applied time-series forecasting techniques to construct models with tuned parameters, including Sequential Minimal Optimisation for regression (SMOreg), linear regression, and Long Short-Term Memory (LSTM), to forecast cyber events and assess the effectiveness of these techniques. We used machine learning algorithms such as Naive Bayes and random forest to evaluate the performance of the models. The best performance results, of 90.4%, were achieved with Support Vector Machine (SVM) and random forest. Additionally, Mean Absolute Error (MAE) and Root Mean Square Error (RMSE) metrics were used to evaluate forecasted event performance. SMOreg’s forecasted events yielded the lowest MAE, while those from linear regression exhibited the lowest RMSE. This work is anticipated to contribute to effective cyber threat detection, aiming to reduce security breaches within critical infrastructure.
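The two forecast-quality metrics the abstract relies on, MAE and RMSE, can be stated in a few lines; the toy event counts below are illustrative, not values from the CSE-CIC-IDS2018 experiments:

```python
import math

def mae(actual, forecast):
    """Mean Absolute Error between observed and forecast event counts."""
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

def rmse(actual, forecast):
    """Root Mean Square Error; penalises large misses more than MAE."""
    return math.sqrt(sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual))

observed = [10, 12, 15, 11]
predicted = [9, 14, 15, 13]
print(mae(observed, predicted), round(rmse(observed, predicted), 3))  # 1.25 1.5
```

Because RMSE squares each error, a forecaster with occasional large misses can have the lowest MAE yet not the lowest RMSE, which is exactly the split the paper reports between SMOreg and linear regression.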
Full article
(This article belongs to the Special Issue Emerging Research on Neural Networks and Anomaly Detection)
Open Access Article
Engineering Four-Qubit Fuel States for Protecting Quantum Thermalization Machine from Decoherence
Information 2024, 15(1), 35; https://doi.org/10.3390/info15010035 - 10 Jan 2024
Abstract
Decoherence is a major issue in quantum information processing, degrading the performance of tasks or even precluding them. Quantum error-correcting codes, decoherence-free subspaces, and the quantum Zeno effect are among the major means of protecting quantum systems from decoherence. Increasing the number of qubits of a quantum system used as a resource in a quantum information task expands the quantum state space. This creates the opportunity to engineer the quantum state of the system in a way that improves the performance of the task and even protects the system against decoherence. Here, we consider a quantum thermalization machine and four-qubit atomic states as its resource. Taking into account realistic conditions such as cavity loss and atomic decoherence due to ambient temperature, we design a quantum state for the atomic resource as a classical mixture of Dicke and W states. We show that, using the mixture probability as the control parameter, the negative effects of the inevitable decoherence on the machine performance almost vanish. Our work paves the way for optimizing resource systems consisting of a higher number of atoms.
(This article belongs to the Special Issue Quantum Information Processing and Machine Learning)
Open Access Article
Streamlining Temporal Formal Verification over Columnar Databases
Information 2024, 15(1), 34; https://doi.org/10.3390/info15010034 - 08 Jan 2024
Abstract
Recent findings demonstrate how database technology enhances the computation of formal verification tasks expressible in linear time logic for finite traces (LTLf). Human-readable declarative languages also help the common practitioner express temporal constraints in a straightforward and accessible way. Notwithstanding the former, this technology is in its infancy, and therefore, few optimization algorithms are known for dealing with the massive amounts of information audited from real systems. We, therefore, present four novel algorithms subsuming entire LTLf expressions while outperforming previous state-of-the-art implementations on top of KnoBAB, leading to the formulation of novel xtLTLf-derived algebraic operators.
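For readers unfamiliar with LTLf, on a finite trace the core operators reduce to simple scans over the event sequence. The toy evaluator below illustrates only the semantics; it is not KnoBAB's columnar implementation, and the activity labels are hypothetical:

```python
def eventually(pred, trace):
    """LTLf 'F phi': phi holds at some position of the finite trace."""
    return any(pred(e) for e in trace)

def globally(pred, trace):
    """LTLf 'G phi': phi holds at every position of the finite trace."""
    return all(pred(e) for e in trace)

def until(p, q, trace):
    """LTLf 'p U q': q holds at some position, and p holds at every
    position before it (strong until)."""
    for e in trace:
        if q(e):
            return True
        if not p(e):
            return False
    return False

# Hypothetical event log: each entry is an activity label.
trace = ["create", "review", "review", "approve"]
print(until(lambda e: e != "approve", lambda e: e == "approve", trace))  # True
```

Columnar engines such as the one the paper targets gain their edge by evaluating such scans over entire logs of traces at once instead of one trace at a time.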
(This article belongs to the Special Issue International Database Engineered Applications)
Open Access Article
Higher Education Students’ Task Motivation in the Generative Artificial Intelligence Context: The Case of ChatGPT
Information 2024, 15(1), 33; https://doi.org/10.3390/info15010033 - 08 Jan 2024
Abstract
Artificial intelligence has recently been attracting the attention of educational researchers, especially ChatGPT as a generative artificial intelligence tool. The generative artificial intelligence context could impact different aspects of students’ learning, such as motivation. The present research investigated the characteristics of students’ task motivation in the artificial intelligence context, specifically the ChatGPT context. The researchers interviewed 15 students about their experiences with ChatGPT to collect data and used inductive and deductive content analysis to investigate students’ motivation when learning with ChatGPT. To arrive at the categories and sub-categories of students’ motivation, the researchers used MAXQDA 2022. Five main categories emerged: task enjoyment, reported effort, result assessment, perceived relevance, and interaction. Each category comprised at least two sub-categories, and each sub-category was further organized into codes. The results indicated more positive characteristics of motivation than negative ones. This could be due to the conversational or social aspect of the chatbot, which enables it to build relationships with humans and maintain good-quality conversations. We conclude that generative AI could be utilized in educational settings to promote students’ motivation to learn and thus raise their learning achievement.
(This article belongs to the Special Issue Advances in Human-Centered Artificial Intelligence)
Open Access Article
Efficient Revocable Attribute-Based Encryption with Data Integrity and Key Escrow-Free
Information 2024, 15(1), 32; https://doi.org/10.3390/info15010032 - 07 Jan 2024
Abstract
Revocable attribute-based encryption (RABE) provides greater flexibility and fine-grained access control for data sharing. However, in most RABE schemes today, the revocation process is performed by the cloud storage provider (CSP). Since the CSP is an honest-but-curious third party, there is no guarantee that the plaintext corresponding to the new ciphertext after revocation is the same as the original plaintext. In addition, most attribute-based encryption schemes suffer from key escrow issues. To overcome these issues, we present an efficient RABE scheme that supports data integrity while also addressing key escrow. We prove the security of our scheme, which reduces to the decisional q-parallel bilinear Diffie-Hellman exponent (q-PBDHE) assumption and the discrete logarithm (DL) assumption. The performance analysis illustrates that our scheme is efficient.
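The integrity concern motivating the scheme, namely that a re-encrypting CSP could silently alter the underlying plaintext, can be illustrated with a simple commitment check. This is a hypothetical sketch using an HMAC, not the paper's pairing-based construction; the key and record values are invented:

```python
import hashlib
import hmac

def commit(plaintext: bytes, key: bytes) -> bytes:
    """Data owner publishes a keyed commitment to the plaintext, so that any
    ciphertext re-encrypted by the cloud can later be checked against it."""
    return hmac.new(key, plaintext, hashlib.sha256).digest()

def verify(plaintext: bytes, key: bytes, commitment: bytes) -> bool:
    """Recipient decrypts the (possibly re-encrypted) ciphertext and checks
    that the recovered plaintext still matches the owner's commitment."""
    return hmac.compare_digest(commit(plaintext, key), commitment)

owner_key = b"shared-verification-key"  # hypothetical shared secret
original = b"patient record v1"
tag = commit(original, owner_key)

print(verify(original, owner_key, tag))            # True: data intact
print(verify(b"tampered record", owner_key, tag))  # False: plaintext changed
```

An RABE scheme must achieve the same effect without a pre-shared symmetric key, which is where the bilinear-pairing machinery in the paper comes in.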
(This article belongs to the Special Issue Editorial Board Members’ Collection Series: "Information Security and Privacy")
Open Access Article
Predicting an Optimal Medication/Prescription Regimen for Patient Discordant Chronic Comorbidities Using Multi-Output Models
Information 2024, 15(1), 31; https://doi.org/10.3390/info15010031 - 05 Jan 2024
Abstract
This paper focuses on addressing the complex healthcare needs of patients struggling with discordant chronic comorbidities (DCCs). Managing these patients within the current healthcare system often proves to be a challenging process, characterized by evolving treatment needs necessitating multiple medical appointments and coordination among different clinical specialists. This makes it difficult for both patients and healthcare providers to set and prioritize medications and understand potential drug interactions. The primary motivation of this research is the need to reduce medication conflict and optimize medication regimens for individuals with DCCs. To achieve this, we allowed patients to specify their health conditions and primary and major treatment concerns, for example, costs of medication, interactions with current drugs, and weight gain. Utilizing data gathered from MTurk and Qualtrics, we gained insights into healthcare providers’ strategies for making and customizing medication regimens. We constructed a dataset and subsequently deployed machine learning algorithms to predict optimal medication regimens for DCC patients with specific treatment concerns. After benchmarking different models, random forest emerged as the top performer, achieving an accuracy of 0.93. This research contributes significantly to the enhancement of decision-making processes, empowers patients to take a more active role in their healthcare, and promotes more informed and productive discussions between patients and their care teams.
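One way to score a multi-output model like the one benchmarked above is exact-match accuracy, where a prediction counts only if every output in the regimen is correct. The sketch below uses hypothetical drug and dosage labels, not the paper's data or its random forest:

```python
def exact_match_accuracy(y_true, y_pred):
    """Fraction of samples whose entire multi-output label tuple is
    predicted correctly (a strict metric for multi-output models)."""
    hits = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    return hits / len(y_true)

# Hypothetical regimen predictions: each sample is a (drug, dosage tier) pair.
y_true = [("metformin", "low"), ("lisinopril", "mid"), ("metformin", "mid")]
y_pred = [("metformin", "low"), ("lisinopril", "low"), ("metformin", "mid")]

print(round(exact_match_accuracy(y_true, y_pred), 2))  # 0.67
```

Stricter than per-output accuracy, this metric drops whenever any component of a regimen is wrong, which matches the clinical requirement that the whole prescription be safe together.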
(This article belongs to the Section Information Applications)
Open Access Article
IoT-Assisted Automatic Driver Drowsiness Detection through Facial Movement Analysis Using Deep Learning and a U-Net-Based Architecture
Information 2024, 15(1), 30; https://doi.org/10.3390/info15010030 - 02 Jan 2024
Abstract
The main purpose of a drowsiness detection system is to ascertain the state of an individual’s eyes, whether they are open and alert or closed, and then alert the driver to their level of fatigue so that they can avoid an accident. It would also be advantageous for people to be promptly alerted in real time before the occurrence of any calamitous event affecting multiple people. The implementation of Internet-of-Things (IoT) technology in driver action recognition has become imperative due to ongoing advancements in Artificial Intelligence (AI) and deep learning (DL) within Advanced Driver Assistance Systems (ADAS), which are significantly transforming the driving experience. This work presents a deep learning model that utilizes a CNN–Long Short-Term Memory (CNN-LSTM) network to detect driver sleepiness. We employ different algorithms, namely EM-CNN, VGG-16, GoogLeNet, AlexNet, ResNet50, and CNN-LSTM, for classification on the datasets, and the CNN-LSTM algorithm exhibits superior accuracy compared to the alternative deep learning algorithms. The model is provided with video clips of a certain period and classifies each clip by analyzing the sequence of motions exhibited by the driver. The key objective of this work is to promote road safety by notifying drivers when they exhibit signs of drowsiness, minimizing the probability of fatigue-related accidents, and it would help in developing an ADAS capable of detecting and addressing driver tiredness proactively. This work aims to achieve high efficacy while maintaining a non-intrusive nature, offering a solution based on facial movement analysis with CNN-LSTM and a U-Net-based architecture that may be seamlessly integrated into current automobiles, enhancing accessibility to a broader spectrum of drivers.
(This article belongs to the Special Issue Blending Artificial Intelligence and Machine Learning with the Internet of Things: Emerging Trends, Issues and Challenges)
Open Access Editorial
Editorial for the Special Issue “Information Technologies in Education, Research, and Innovation”
Information 2024, 15(1), 29; https://doi.org/10.3390/info15010029 - 02 Jan 2024
Abstract
The COVID-19 pandemic has accelerated the integration of education and technology, ushering in a new era where digital tools and innovative approaches take center stage across higher education and beyond [...]
(This article belongs to the Special Issue Information Technologies in Education, Research and Innovation)
Topics
Topic in
Administrative Sciences, Future Internet, Information, Smart Cities, Social Sciences, Technologies, Urban Science
From ChatGPT to GovGPT: The Future of Digital Government
Topic Editors: Liang Ma, Yueping Zheng, Ziteng Fan
Deadline: 31 January 2024
Topic in
Computers, Energies, Future Internet, Information, Mathematics
Research on Blockchain Technology for Peer-to-Peer (P2P) Energy Trading
Topic Editors: Pierre-Martin Tardif, Brahim El Bhiri, Bilal Abu-Salih, Kenji Tanaka
Deadline: 29 February 2024
Topic in
Data, Future Internet, Information, Mathematics, Symmetry
Application of Deep Learning Method in 6G Communication Technology
Topic Editors: Mohamed Abouhawwash, K. Venkatachalam
Deadline: 31 March 2024
Topic in
AI, Algorithms, BDCC, Future Internet, Informatics, Information, Languages, Publications
AI Chatbots: Threat or Opportunity?
Topic Editors: Antony Bryant, Roberto Montemanni, Min Chen, Paolo Bellavista, Kenji Suzuki, Jeanine Treffers-Daller
Deadline: 30 April 2024
Special Issues
Special Issue in
Information
Advances in Cybersecurity and Reliability
Guest Editors: Ammar Alazab, Moutaz Alazab
Deadline: 20 January 2024
Special Issue in
Information
Multidimensional Data Structures and Big Data Management
Guest Editor: Spyros Sioutas
Deadline: 31 January 2024
Special Issue in
Information
Telematics, GIS and Artificial Intelligence
Guest Editors: Roberto Zagal Flores, Miguel Félix Mata Rivera
Deadline: 15 February 2024
Special Issue in
Information
Artificial Intelligence and Games Science in Education
Guest Editors: Petros Lameras, Sylvester Arnab, Panagiotis Petridis
Deadline: 29 February 2024
Topical Collections
Topical Collection in
Information
Natural Language Processing and Applications: Challenges and Perspectives
Collection Editor: Diego Reforgiato Recupero
Topical Collection in
Information
Knowledge Graphs for Search and Recommendation
Collection Editors: Annalina Caputo, Pierpaolo Basile
Topical Collection in
Information
Augmented Reality Technologies, Systems and Applications
Collection Editors: Jorge Bacca-Acosta, N.D. Duque-Mendez, Ramon Fabregat