Search Results (3,588)

Search Parameters:
Journal = Information

22 pages, 7459 KiB  
Article
Metaverse Applications in Bioinformatics: A Machine Learning Framework for the Discrimination of Anti-Cancer Peptides
Information 2024, 15(1), 48; https://doi.org/10.3390/info15010048 - 15 Jan 2024
Viewed by 86
Abstract
Bioinformatics and genomics are driving a healthcare revolution, particularly in the domain of drug discovery for anticancer peptides (ACPs). The integration of artificial intelligence (AI) has transformed healthcare, enabling personalized and immersive patient care experiences. These advanced technologies, coupled with the power of bioinformatics and genomic data, facilitate groundbreaking developments. The precise prediction of ACPs from complex biological sequences remains an ongoing challenge in genomics. Currently, conventional approaches such as chemotherapy, targeted therapy, radiotherapy, and surgery are widely used for cancer treatment. However, these methods fail to completely eradicate neoplastic cells or cancer stem cells, and they damage healthy tissues, resulting in morbidity and even mortality. To control such diseases, oncologists and drug designers are eager to develop new preventive techniques with greater efficiency and fewer side effects. Therefore, this research provides an optimized computational framework for discriminating ACPs. The proposed approach intelligently integrates four peptide encoding methods, namely amino acid occurrence analysis (AAOA), dipeptide occurrence analysis (DOA), tripeptide occurrence analysis (TOA), and enhanced pseudo amino acid composition (EPseAAC). To overcome bias and reduce true error, the synthetic minority oversampling technique (SMOTE) is applied to balance the samples across classes. The empirical results over two datasets, where the accuracy of the proposed model is 97.56% on the benchmark dataset and 95.00% on the independent dataset, verify the effectiveness of our ensemble learning mechanism and show remarkable performance compared with state-of-the-art (SOTA) methods. In addition, the application of metaverse technology in healthcare holds promise for transformative innovations, potentially enhancing patient experiences and providing novel solutions for preventive techniques and patient care.
(This article belongs to the Special Issue Applications of Deep Learning in Bioinformatics and Image Processing)
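A minimal sketch of the two ingredients the abstract names that translate directly to code: a plausible reading of AAOA as per-residue frequency vectors, and SMOTE class balancing via imbalanced-learn. The peptide sequences, labels, and the aaoa helper are hypothetical stand-ins, not the paper's pipeline.

```python
# Sketch only: AAOA read as per-residue frequencies, then SMOTE balancing.
# Sequences and labels below are toy data, not the paper's datasets.
import numpy as np
from collections import Counter
from imblearn.over_sampling import SMOTE

AMINO = "ACDEFGHIKLMNPQRSTVWY"

def aaoa(seq: str) -> np.ndarray:
    """Amino-acid occurrence vector: frequency of each of the 20 residues."""
    return np.array([seq.count(a) for a in AMINO], dtype=float) / len(seq)

peptides = ["FLPIVGKLLSGLL", "GIGKFLHSAKKFGKAFVGEIMNS", "ACDEF", "KWKLFKKIEK"]
X = np.vstack([aaoa(p) for p in peptides * 15])   # toy feature matrix
y = np.array(([1] * 3 + [0]) * 15)                # imbalanced ACP / non-ACP labels

X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y)
print(Counter(y), "->", Counter(y_bal))           # {1: 45, 0: 15} -> {1: 45, 0: 45}
```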

16 pages, 1807 KiB  
Article
Identifying Smartphone Users Based on Activities in Daily Living Using Deep Neural Networks
Information 2024, 15(1), 47; https://doi.org/10.3390/info15010047 - 15 Jan 2024
Viewed by 98
Abstract
Smartphones have become ubiquitous, allowing people to perform various tasks anytime and anywhere. As technology continues to advance, smartphones can now sense and connect to networks, providing context-awareness for different applications. Many individuals store sensitive data on their devices like financial credentials and personal information due to the convenience and accessibility. However, losing control of this data poses risks if the phone gets lost or stolen. While passwords, PINs, and pattern locks are common security methods, they can still be compromised through exploits like smudging residue from touching the screen. This research explored leveraging smartphone sensors to authenticate users based on behavioral patterns when operating the device. The proposed technique uses a deep learning model called DeepResNeXt, a type of deep residual network, to accurately identify smartphone owners through sensor data efficiently. Publicly available smartphone datasets were used to train the suggested model and other state-of-the-art networks to conduct user recognition. Multiple experiments validated the effectiveness of this framework, surpassing previous benchmark models in this area with a top F1-score of 98.96%. Full article
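The abstract does not spell out the DeepResNeXt architecture, but the core residual idea it builds on is easy to illustrate. Below is a generic 1-D residual block over inertial-sensor windows in PyTorch; the channel count, window length, and block layout are illustrative assumptions, not the paper's configuration.

```python
# Generic 1-D residual block for sensor windows -- illustrative only; the
# paper's DeepResNeXt (cardinality, depth, widths) is not reproduced here.
import torch
import torch.nn as nn

class ResidualBlock1D(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv1d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm1d(channels),
            nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm1d(channels),
        )
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(self.body(x) + x)   # skip connection: output = F(x) + x

x = torch.randn(8, 6, 128)  # batch of 8 windows, 6 sensor axes, 128 samples
print(ResidualBlock1D(6)(x).shape)  # torch.Size([8, 6, 128])
```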

29 pages, 4963 KiB  
Article
A Holistic Approach to Ransomware Classification: Leveraging Static and Dynamic Analysis with Visualization
Information 2024, 15(1), 46; https://doi.org/10.3390/info15010046 - 14 Jan 2024
Viewed by 401
Abstract
Ransomware is a type of malicious software that encrypts a victim’s files and demands payment in exchange for the decryption key. It is a rapidly growing and evolving threat that has caused significant damage and disruption to individuals and organizations around the world. In this paper, we propose a comprehensive ransomware classification approach based on the comparison of similarity matrices derived from static analysis, dynamic analysis, and visualization. Our approach uses multiple analysis techniques to extract features from ransomware samples and to generate similarity matrices based on these features. These matrices are then compared using a variety of comparison algorithms to identify similarities and differences between the samples. The resulting similarity scores are used to classify the samples into categories such as families, variants, and versions. We evaluate our approach on a dataset of ransomware samples and demonstrate that it classifies them with a high degree of accuracy. One advantage of our approach is the use of visualization, which allows us to classify and cluster large datasets of ransomware in a more intuitive and effective way. In addition, static analysis has the advantage of being fast and accurate, while dynamic analysis allows us to classify and cluster packed ransomware samples. We also compare our approach to classification approaches based on single analysis techniques and show that ours outperforms them in classification accuracy. Overall, our study demonstrates the potential of a comprehensive approach based on the comparison of multiple analysis techniques, including static analysis, dynamic analysis, and visualization, for the accurate and efficient classification of ransomware. It also highlights the importance of considering multiple analysis techniques in the development of effective ransomware classification methods, especially when dealing with large datasets and packed samples.
(This article belongs to the Special Issue Wireless IoT Network Protocols II)
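A minimal sketch of the similarity-matrix idea the abstract describes: compute one similarity matrix per analysis view, fuse them, and cluster samples into families. The random feature vectors stand in for real static and dynamic features, and the fusion rule (a simple average) is an assumption for illustration.

```python
# Sketch: per-view similarity matrices fused and clustered into families.
# Feature vectors are random stand-ins for real static/dynamic features.
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity
from scipy.spatial.distance import squareform
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
static_feats = rng.random((10, 64))    # stand-in static-analysis features
dynamic_feats = rng.random((10, 32))   # stand-in dynamic-analysis features

S = (cosine_similarity(static_feats) + cosine_similarity(dynamic_feats)) / 2
D = 1.0 - S                            # similarity -> distance
np.fill_diagonal(D, 0.0)
families = fcluster(linkage(squareform(D, checks=False), method="average"),
                    t=3, criterion="maxclust")
print(families)                        # family label per sample
```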

28 pages, 4717 KiB  
Article
ABAC Policy Mining through Affiliation Networks and Biclique Analysis
Information 2024, 15(1), 45; https://doi.org/10.3390/info15010045 - 12 Jan 2024
Viewed by 239
Abstract
Policy mining is an automated procedure for generating access rules by mining patterns from individual permissions, which are typically registered in access logs. Attribute-based access control (ABAC) is a model that allows security administrators to create a set of rules, known as the access control policy, to restrict access in information systems through logical expressions defined over the attribute values of three types of entities: users, resources, and environmental conditions. Applying policy mining in large-scale ABAC-oriented systems is a must, because creating rules by hand is not workable when the system must manage thousands of users and resources. In the literature on ABAC policy mining, current solutions follow a frequency-based strategy to extract rules; the problem with that approach is that a high support threshold leaves many resources without rules (especially those with few requesters), while a low threshold leads to an explosion of unreliable rules. Another challenge is the difficulty of collecting a set of test examples for correctness evaluation, since the classes of user–resource pairs available in logs are imbalanced. Moreover, alternative evaluation criteria to correctness, such as peculiarity and diversity, have not been explored for ABAC policy mining. To address these challenges, we propose modeling access logs as affiliation networks and applying network and biclique analysis techniques (1) to extract ABAC rules supported by graph patterns without a frequency threshold, (2) to generate synthetic examples for correctness evaluation, and (3) to create evaluation measures that complement correctness. We found that the rules extracted through our strategy can cover more resources than the frequency-based strategy without rule explosion; moreover, our synthetic examples are useful for increasing the certainty level of correctness results. Finally, our alternative measures offer a wider evaluation profile for policy mining.
(This article belongs to the Special Issue Complex Network Analysis in Security)
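A simplified sketch of the affiliation-network view: model the access log as a bipartite user–resource graph, and group users with identical resource neighborhoods, which exposes complete bipartite blocks (bicliques). Each biclique is a candidate scope for an ABAC rule. The log entries and the grouping shortcut are illustrative; the paper's full biclique analysis is richer.

```python
# Sketch: biclique-style rule scopes from an affiliation (bipartite) network.
# Users sharing an identical resource neighborhood form a complete block.
from collections import defaultdict

access_log = [("alice", "wiki"), ("alice", "repo"),
              ("bob", "wiki"), ("bob", "repo"),
              ("carol", "repo")]

by_user = defaultdict(set)
for user, resource in access_log:
    by_user[user].add(resource)

bicliques = defaultdict(set)
for user, resources in by_user.items():
    bicliques[frozenset(resources)].add(user)

for resources, users in bicliques.items():
    print(sorted(users), "x", sorted(resources))
# ['alice', 'bob'] x ['repo', 'wiki']   <- biclique: candidate rule scope
# ['carol'] x ['repo']
```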

19 pages, 833 KiB  
Article
Radar-Based Invisible Biometric Authentication
Information 2024, 15(1), 44; https://doi.org/10.3390/info15010044 - 12 Jan 2024
Viewed by 412
Abstract
Bio-Radar (BR) systems have shown great promise for biometric applications. Conventional methods can be forged or fooled. Even alternative methods intrinsic to the user, such as the Electrocardiogram (ECG), present drawbacks, as they require contact with the sensor. Therefore, research has turned towards alternatives such as the BR. In this work, a BR dataset with 20 subjects exposed to different emotion-eliciting stimuli (happiness, fearfulness, and neutrality) on different dates was explored. The spectral distributions of the BR signal were studied as the biometric template. Furthermore, this study analysed the respiratory and cardiac signals separately, as well as their fusion. The main test devised was authentication, where the system seeks to validate an individual’s claimed identity. This test showed the feasibility of such systems, obtaining an Equal Error Rate (EER) of 3.48% when the training and testing data are from the same day and the same emotional stimuli. In addition, the dependency of the results on time and emotional state is fully analysed. Complementary tests, such as sensitivity to the number of users, were also performed. Overall, the potential of BR systems for biometrics was evaluated.
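The EER reported here is the standard authentication operating point where the false-acceptance and false-rejection rates cross. A minimal sketch of how it is computed from match scores, using synthetic genuine/impostor scores rather than the paper's Bio-Radar data:

```python
# EER from authentication scores: the point where FAR and FRR cross.
# Scores below are synthetic; the paper's 3.48% comes from Bio-Radar data.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(2)
genuine = rng.normal(2.0, 1.0, 500)    # scores for true-identity claims
impostor = rng.normal(0.0, 1.0, 500)   # scores for impostor claims
y = np.r_[np.ones(500), np.zeros(500)]
scores = np.r_[genuine, impostor]

fpr, tpr, _ = roc_curve(y, scores)
fnr = 1 - tpr
eer = fpr[np.nanargmin(np.abs(fpr - fnr))]
print(f"EER ~ {eer:.2%}")
```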

16 pages, 275 KiB  
Article
A Traceable Universal Designated Verifier Transitive Signature Scheme
Information 2024, 15(1), 43; https://doi.org/10.3390/info15010043 - 12 Jan 2024
Viewed by 274
Abstract
A transitive signature scheme enables anyone to obtain the signature on edge (i,k) by combining the signatures on edges (i,j) and (j,k), but it suffers from signature theft and signature abuse. Existing work has addressed these problems with a universal designated verifier transitive signature (UDVTS). However, the UDVTS scheme only enables the designated verifier to authenticate signatures, which gives the signer a simple way to deny having signed some messages. Because the UDVTS is not publicly verifiable, the verifier cannot seek help in arbitrating the source of signatures. To address this problem, this paper proposes a traceable universal designated verifier transitive signature (TUDVTS) and its security model. We introduce a tracer into the system, who traces a signature back to its true source after the verifier has submitted an application for arbitration. To show the feasibility of our primitive, we construct a concrete scheme from a bilinear group pair (G,GT) of prime order and prove that the scheme satisfies unforgeability, privacy, and traceability.
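The composition property in the first sentence can be shown with a deliberately insecure toy: if an "edge signature" is the difference of secret node labels, signatures on (i,j) and (j,k) combine by addition into one on (i,k). This illustrates the algebra only; the paper's TUDVTS uses bilinear-group operations with cryptographic hardness behind them.

```python
# Toy illustration of transitivity only -- NOT a secure signature scheme.
secret_label = {"i": 11, "j": 42, "k": 7}  # signer's private node labels

def sign_edge(a: str, b: str) -> int:
    return secret_label[b] - secret_label[a]   # requires the secret

def compose(sig_ab: int, sig_bc: int) -> int:
    return sig_ab + sig_bc                      # anyone can do this step

assert compose(sign_edge("i", "j"), sign_edge("j", "k")) == sign_edge("i", "k")
```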

12 pages, 1442 KiB  
Article
Simulation-Enhanced MQAM Modulation Identification in Communication Systems: A Subtractive Clustering-Based PSO-FCM Algorithm Study
Information 2024, 15(1), 42; https://doi.org/10.3390/info15010042 - 12 Jan 2024
Viewed by 215
Abstract
Signal modulation recognition often relies on clustering algorithms. The fuzzy c-means (FCM) algorithm, commonly used for such tasks, often converges to local optima, which is a challenge particularly in low-signal-to-noise-ratio (SNR) environments. We propose an enhanced FCM algorithm that incorporates particle swarm optimization (PSO) to improve the accuracy of recognizing M-ary quadrature amplitude modulation (MQAM) signal orders. The method is a two-step clustering process. First, a subtractive clustering algorithm based on SNR uses the constellation diagram of the received signal to determine the initial number of cluster centers. The PSO-FCM algorithm then refines these centers to improve precision. Accurate signal classification and identification are achieved by evaluating the relative sizes of the radii around the cluster centers within the MQAM constellation diagram and determining the modulation order. The results indicate that the SC-based PSO-FCM algorithm outperforms conventional FCM in clustering effectiveness, notably enhancing modulation recognition rates in low-SNR conditions, when evaluated against QAM signals ranging from 4QAM to 64QAM.
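To make the FCM step concrete, here is a minimal fuzzy c-means implementation run on a synthetic noisy 4QAM constellation. The SNR-based subtractive clustering that seeds the centers and the PSO refinement are omitted; the constellation, noise level, and fuzzifier m are assumptions for illustration.

```python
# Minimal FCM on a synthetic noisy 4QAM constellation (illustrative only;
# the paper seeds centers via subtractive clustering and refines with PSO).
import numpy as np

rng = np.random.default_rng(3)
ideal = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]], dtype=float)  # 4QAM I/Q
X = np.vstack([p + rng.normal(0, 0.2, (100, 2)) for p in ideal])

def fcm(X, c, m=2.0, iters=100):
    U = rng.random((len(X), c)); U /= U.sum(1, keepdims=True)
    for _ in range(iters):
        Um = U ** m
        centers = Um.T @ X / Um.sum(0)[:, None]          # weighted centroids
        d = np.linalg.norm(X[:, None] - centers[None], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))                   # membership update
        U /= U.sum(1, keepdims=True)
    return centers, U

centers, U = fcm(X, c=4)
print(np.round(centers, 2))  # four centers near (+/-1, +/-1)
```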

24 pages, 16129 KiB  
Article
Public Health Implications for Effective Community Interventions Based on Hospital Patient Data Analysis Using Deep Learning Technology in Indonesia
Information 2024, 15(1), 41; https://doi.org/10.3390/info15010041 - 11 Jan 2024
Viewed by 332
Abstract
Public health is a crucial field in maintaining and improving quality of life, and research on it allows for a deeper understanding of the health problems a population faces, including disease prevalence, risk factors, and other determinants of health. This work explores the potential of hospital patient data analysis as a tool for understanding community-level implications and deriving insights for effective community health interventions. The study recognises the significance of harnessing the vast amount of data generated within hospital settings to inform population-level health strategies. The methodology involves the collection and analysis of deidentified patient data from a representative hospital in Indonesia. Various data analysis techniques, such as statistical modelling, data mining, and machine learning algorithms, are used to identify patterns, trends, and associations within the data. A program written in Python is used to analyse five years of patient data, from 2018 to 2022. The findings are then interpreted within the context of public health implications, considering factors such as disease prevalence, socioeconomic determinants, and healthcare utilisation patterns. The results of the data analysis provide valuable insights into the public health implications of hospital patient data. The research also predicts future patient numbers at the hospital based on disease, age, and geographical residence: patient numbers are predicted to be largely unaffected by infection in 2023, but to increase significantly, up to 10,000 patients, in March–April 2024, following the trend observed at the end of 2022. The resulting recommendations encompass targeted prevention strategies, improved healthcare delivery models, and community engagement initiatives, and the research emphasises the importance of collaboration between healthcare providers, policymakers, and community stakeholders in implementing and evaluating these interventions.
(This article belongs to the Special Issue Advances in AI for Health and Medical Applications)
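An illustrative sketch of the kind of patient-volume forecast described: fit a linear trend plus annual seasonality to five years of monthly counts and extrapolate through April 2024. The series is synthetic and the model is a simple least-squares stand-in, not the paper's deep learning approach.

```python
# Sketch: trend + annual-seasonality forecast of monthly patient counts.
# Data are synthetic; the paper uses real hospital records and deep learning.
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
idx = pd.date_range("2018-01", "2022-12", freq="MS")
t = np.arange(len(idx))
counts = 2000 + 15 * t + 400 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 80, len(t))
series = pd.Series(counts, index=idx)

def design(t):
    # Intercept, linear trend, and annual sine/cosine terms.
    return np.c_[np.ones_like(t, dtype=float), t,
                 np.sin(2 * np.pi * t / 12), np.cos(2 * np.pi * t / 12)]

beta, *_ = np.linalg.lstsq(design(t), series.values, rcond=None)
future_t = np.arange(len(idx), len(idx) + 16)   # Jan 2023 .. Apr 2024
forecast = design(future_t) @ beta
print(pd.Series(forecast, index=pd.date_range("2023-01", periods=16,
                                              freq="MS")).round(0))
```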

27 pages, 820 KiB  
Article
Secure Genomic String Search with Parallel Homomorphic Encryption
Information 2024, 15(1), 40; https://doi.org/10.3390/info15010040 - 11 Jan 2024
Viewed by 266
Abstract
Fully homomorphic encryption (FHE) cryptographic systems enable limitless computations over encrypted data, providing solutions to many of today’s data security problems. While effective FHE platforms can address modern data security concerns in insecure environments, their extended execution times hinder broader application. This project aims to enhance FHE systems through an efficient parallel framework, building upon the existing torus FHE (TFHE) system of Chillotti et al. The TFHE system was chosen for its superior bootstrapping computations and precise results for countless Boolean gate evaluations, such as AND and XOR. Our first step was to expand the gate operations within the current system, shifting towards algebraic circuits and using graphics processing units (GPUs) to run cryptographic operations in parallel. We then applied this GPU-parallel FHE framework to a needed genomic data operation, string search. We used popular string distance metrics (Hamming distance, edit distance, set maximal matches) to measure the disparities between genomic sequences in a secure context, with all data and operations remaining under encryption. Our experiments show that the GPU implementation vastly outperforms the former method, providing a 20-fold speedup for any 32-bit Boolean operation and a 14.5-fold speedup for multiplications. This paper introduces unique enhancements to existing FHE cryptographic systems, using GPUs and additional algorithms to accelerate fundamental computations. Looking ahead, the presented framework can be further developed to accommodate more complex, real-world applications.
(This article belongs to the Special Issue Digital Privacy and Security)
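A plaintext analogue of the encrypted Hamming-distance circuit may help: encode each base with two bits, XOR position-wise, and OR a symbol's bit differences before summing. Under TFHE, each XOR/OR gate (and the adder tree for the final count) would be evaluated homomorphically over ciphertexts; this sketch shows only the Boolean logic, with an assumed 2-bit base encoding.

```python
# Plaintext analogue of the encrypted Hamming-distance gate circuit.
# Under TFHE, every XOR/OR below becomes a homomorphic Boolean gate.
BITS = {"A": (0, 0), "C": (0, 1), "G": (1, 0), "T": (1, 1)}

def hamming(seq1: str, seq2: str) -> int:
    total = 0
    for a, b in zip(seq1, seq2, strict=True):
        (a0, a1), (b0, b1) = BITS[a], BITS[b]
        total += (a0 ^ b0) | (a1 ^ b1)   # symbols differ iff any bit differs
    return total

print(hamming("GATTACA", "GACTATA"))     # -> 2
```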

20 pages, 3183 KiB  
Article
Time Series Forecasting Utilizing Automated Machine Learning (AutoML): A Comparative Analysis Study on Diverse Datasets
Information 2024, 15(1), 39; https://doi.org/10.3390/info15010039 - 11 Jan 2024
Viewed by 381
Abstract
Automated Machine Learning (AutoML) tools are revolutionizing the field of machine learning by significantly reducing the need for deep computer science expertise. Designed to make ML more accessible, they enable users to build high-performing models without extensive technical knowledge. This study delves into these tools in the context of time series analysis, which is essential for forecasting future trends from historical data. We evaluate three prominent AutoML tools—AutoGluon, Auto-Sklearn, and PyCaret—across various metrics, employing diverse datasets that include Bitcoin and COVID-19 data. The results reveal that the performance of each tool is highly dependent on the specific dataset and its ability to manage the complexities of time series data. This thorough investigation not only demonstrates the strengths and limitations of each AutoML tool but also highlights the criticality of dataset-specific considerations in time series analysis. Offering valuable insights for both practitioners and researchers, this study emphasizes the ongoing need for research and development in this specialized area. It aims to serve as a reference for organizations dealing with time series datasets and a guiding framework for future academic research in enhancing the application of AutoML tools for time series forecasting and analysis.
(This article belongs to the Special Issue New Deep Learning Approach for Time Series Forecasting)
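A hedged sketch of the comparison protocol such a study implies: hold out the final segment of each series, obtain a forecast from each tool, and score it. Here the AutoGluon / Auto-Sklearn / PyCaret calls are replaced by two simple baseline forecasters, since the tools' actual APIs are not reproduced; only the evaluation harness is shown.

```python
# Evaluation harness sketch: holdout split plus MAE/RMSE scoring. The two
# baselines stand in for AutoML tools, whose APIs are not reproduced here.
import numpy as np

def mae(y, yhat):  return float(np.mean(np.abs(y - yhat)))
def rmse(y, yhat): return float(np.sqrt(np.mean((y - yhat) ** 2)))

def naive_last(train, horizon):   # repeat the last observation
    return np.full(horizon, train[-1])

def drift(train, horizon):        # extend the average historical slope
    slope = (train[-1] - train[0]) / (len(train) - 1)
    return train[-1] + slope * np.arange(1, horizon + 1)

rng = np.random.default_rng(5)
series = np.cumsum(rng.normal(0.5, 1.0, 300))   # synthetic trending series
train, test = series[:-30], series[-30:]

for name, forecaster in [("naive", naive_last), ("drift", drift)]:
    pred = forecaster(train, len(test))
    print(f"{name:5s}  MAE={mae(test, pred):6.2f}  RMSE={rmse(test, pred):6.2f}")
```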

37 pages, 6465 KiB  
Review
Unmanned Autonomous Intelligent System in 6G Non-Terrestrial Network
Information 2024, 15(1), 38; https://doi.org/10.3390/info15010038 - 11 Jan 2024
Viewed by 328
Abstract
The non-terrestrial network (NTN) is a trending topic in the field of communication, as it shows promise for scenarios in which terrestrial infrastructure is unavailable. Unmanned autonomous intelligent systems (UAISs), a physical form of artificial intelligence (AI), have gained significant attention from academia and industry, with applications in autonomous driving, logistics, area surveillance, and medical services. With the rapid evolution of information and communication technology (ICT), 5G and beyond-5G communication have enabled numerous intelligent applications through the comprehensive use of advanced NTN communication technology and artificial intelligence. Complex tasks in remote or communication-challenged areas urgently require reliable, ultra-low-latency communication networks to support UAIS functions such as localization, navigation, perception, decision-making, and motion planning. In remote areas, however, reliable communication coverage is unavailable, which poses a significant challenge for intelligent system applications. The rapid development of NTN communication has shed new light on intelligent applications that require ubiquitous network connections in space, air, ground, and sea, but challenges arise when NTN technology is used in unmanned autonomous intelligent systems. Our research examines the advancements and obstacles, in both academic research and industry applications, of NTN technology for UAIS supported by unmanned aerial vehicles (UAVs) and other low-altitude platforms. Edge and cloud computing are also crucial for unmanned autonomous intelligent systems, which require distributed computation architectures for computationally intensive tasks and massive data offloading. This paper presents a comprehensive analysis of the opportunities and challenges of unmanned autonomous intelligent systems in UAV NTN, along with NTN-based unmanned autonomous intelligent systems and their applications. A field trial case study demonstrates the application of NTN in UAIS.

23 pages, 2929 KiB  
Review
Parametric and Nonparametric Machine Learning Techniques for Increasing Power System Reliability: A Review
Information 2024, 15(1), 37; https://doi.org/10.3390/info15010037 - 11 Jan 2024
Viewed by 354
Abstract
Due to aging infrastructure, technical issues, increased demand, and environmental developments, the reliability of power systems is of paramount importance. Utility companies aim to provide uninterrupted and efficient power supply to their customers. To achieve this, they focus on implementing techniques and methods to minimize downtime in power networks and reduce maintenance costs. In addition to traditional statistical methods, modern technologies such as machine learning have become increasingly common for enhancing system reliability and customer satisfaction. The primary objective of this study is to review parametric and nonparametric machine learning techniques and their applications in relation to maintenance-related aspects of power distribution system assets, including (1) distribution lines, (2) transformers, and (3) insulators. Compared to other reviews, this study offers a unique perspective on machine learning algorithms and their predictive capabilities in relation to the critical components of power distribution systems.

16 pages, 5487 KiB  
Article
Rapid Forecasting of Cyber Events Using Machine Learning-Enabled Features
Information 2024, 15(1), 36; https://doi.org/10.3390/info15010036 - 11 Jan 2024
Viewed by 397
Abstract
In recent years, there has been a notable surge in both the complexity and volume of targeted cyber attacks, largely due to heightened vulnerabilities in widely adopted technologies. The early prediction and detection of attacks are vital to mitigating potential risks and preserving network resilience. With the rapid increase in digital data and the growing complexity of cyber attacks, big data has become a crucial tool for intrusion detection and forecasting. By leveraging unstructured big data, intrusion detection and forecasting systems can become more effective at detecting and preventing cyber attacks and anomalies. While some progress has been made on attack prediction, little attention has been given to forecasting cyber events from time series and unstructured big data. In this research, we used the CSE-CIC-IDS2018 dataset, a comprehensive dataset containing several attacks on a realistic network. We then constructed time-series models with tuned parameters to assess the effectiveness of forecasting techniques including Sequential Minimal Optimisation for regression (SMOreg), linear regression, and Long Short-Term Memory (LSTM). We used machine learning algorithms such as Naive Bayes and random forest to evaluate the performance of the models; the best results, 90.4%, were achieved with Support Vector Machine (SVM) and random forest. Additionally, Mean Absolute Error (MAE) and Root Mean Square Error (RMSE) metrics were used to evaluate forecast performance: SMOreg’s forecasted events yielded the lowest MAE, while those from linear regression exhibited the lowest RMSE. This work is anticipated to contribute to effective cyber threat detection, aiming to reduce security breaches within critical infrastructure.
(This article belongs to the Special Issue Emerging Research on Neural Networks and Anomaly Detection)
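A minimal sketch of the time-series setup: turn an event-count series into lagged feature vectors and fit a support-vector regressor (scikit-learn's SVR as an analogue of SMOreg). The Poisson series, lag depth, and SVR parameters are assumptions; the paper works from CSE-CIC-IDS2018 traffic.

```python
# Sketch: lagged-feature forecasting of event counts with SVR (an analogue
# of SMOreg). The event series is synthetic, not CSE-CIC-IDS2018.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(6)
events = rng.poisson(20, 400).astype(float)   # hourly cyber-event counts
LAGS = 24                                     # use the previous 24 hours

X = np.array([events[i:i + LAGS] for i in range(len(events) - LAGS)])
y = events[LAGS:]
split = 300
model = SVR(kernel="rbf", C=10.0).fit(X[:split], y[:split])
pred = model.predict(X[split:])
print("MAE:", np.mean(np.abs(y[split:] - pred)).round(2))
```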

14 pages, 1632 KiB  
Article
Engineering Four-Qubit Fuel States for Protecting Quantum Thermalization Machine from Decoherence
Information 2024, 15(1), 35; https://doi.org/10.3390/info15010035 - 10 Jan 2024
Viewed by 422
Abstract
Decoherence is a major issue in quantum information processing, degrading the performance of tasks or even precluding them. Quantum error-correcting codes, decoherence-free subspaces, and the quantum Zeno effect are among the major means of protecting quantum systems from decoherence. Increasing the number of qubits of a quantum system used as a resource in a quantum information task expands the quantum state space. This creates the opportunity to engineer the quantum state of the system in a way that improves the performance of the task and even protects the system against decoherence. Here, we consider a quantum thermalization machine with four-qubit atomic states as its resource. Taking into account realistic conditions such as cavity loss and atomic decoherence due to ambient temperature, we design a quantum state for the atomic resource as a classical mixture of Dicke and W states. We show that, using the mixture probability as the control parameter, the negative effects of the inevitable decoherence on the machine’s performance almost vanish. Our work paves the way for optimizing resource systems consisting of a larger number of atoms.
(This article belongs to the Special Issue Quantum Information Processing and Machine Learning)
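The resource state described is concrete enough to construct numerically: a classical mixture rho = p |D><D| + (1-p) |W><W| of the four-qubit two-excitation Dicke state and the W state, with p as the control parameter. A sketch in numpy (state-vector conventions and the value of p are illustrative):

```python
# Building rho = p |D(4,2)><D(4,2)| + (1-p) |W><W| for four qubits.
import numpy as np
from itertools import combinations

def symmetric_state(n_qubits: int, n_excitations: int) -> np.ndarray:
    """Equal superposition of all basis states with the given excitation number."""
    psi = np.zeros(2 ** n_qubits)
    for ones in combinations(range(n_qubits), n_excitations):
        psi[sum(1 << (n_qubits - 1 - q) for q in ones)] = 1.0
    return psi / np.linalg.norm(psi)

w = symmetric_state(4, 1)      # |W>: one excitation shared over 4 qubits
dicke = symmetric_state(4, 2)  # |D(4,2)>: two excitations, C(4,2)=6 terms

p = 0.3                        # mixture probability (the control knob)
rho = p * np.outer(dicke, dicke) + (1 - p) * np.outer(w, w)
print(np.trace(rho).round(6), np.linalg.matrix_rank(rho))  # 1.0, rank 2
```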

22 pages, 974 KiB  
Article
Streamlining Temporal Formal Verification over Columnar Databases
Information 2024, 15(1), 34; https://doi.org/10.3390/info15010034 - 08 Jan 2024
Viewed by 430
Abstract
Recent findings demonstrate how database technology enhances the computation of formal verification tasks expressible in linear time logic over finite traces (LTLf). Human-readable declarative languages also help the common practitioner express temporal constraints in a straightforward and accessible way. Notwithstanding the former, this technology is in its infancy, and few optimization algorithms are known for dealing with the massive amounts of information audited from real systems. We therefore present four novel algorithms subsuming entire LTLf expressions while outperforming previous state-of-the-art implementations on top of KnoBAB, leading to the formulation of novel xtLTLf-derived algebraic operators.
(This article belongs to the Special Issue International Database Engineered Applications)
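A sketch of the general idea, in the spirit of running LTLf-style constraints inside a columnar store as KnoBAB does: evaluate a "response" template (every `a` is eventually followed by a `b` in the same trace) with column operations over an event table rather than a per-trace interpreter. The schema, data, and the response helper are illustrative, not KnoBAB's operators.

```python
# Columnar evaluation of an LTLf-style "response" constraint over traces.
# Schema and data are illustrative; KnoBAB's xtLTLf operators are richer.
import pandas as pd

log = pd.DataFrame({
    "trace": [1, 1, 1, 2, 2, 3],
    "pos":   [0, 1, 2, 0, 1, 0],
    "event": ["a", "b", "a", "a", "b", "b"],
})

def response(log: pd.DataFrame, a: str, b: str) -> pd.Series:
    """True per trace iff every occurrence of `a` is followed by some `b`."""
    last_a = log[log.event == a].groupby("trace")["pos"].max()
    last_b = log[log.event == b].groupby("trace")["pos"].max()
    ok = last_b.reindex(last_a.index, fill_value=-1) > last_a
    return ok.reindex(log.trace.unique(), fill_value=True)  # no `a`: vacuously true

print(response(log, "a", "b"))  # trace 1: False (trailing a); 2: True; 3: True
```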
