Search Results (2,132)

Search Parameters:
Journal = Future Internet

18 pages, 3483 KiB  
Article
Digital Communication and Social Organizations: An Evaluation of the Communication Strategies of the Most-Valued NGOs Worldwide
Future Internet 2024, 16(1), 26; https://doi.org/10.3390/fi16010026 - 13 Jan 2024
Abstract
The communication of organizations with their audiences has undergone changes thanks to the Internet. Non-Governmental Organizations (NGOs), as influential groups, are no exception, as much of their activism takes place through grassroots digital lobbying. The consolidation of Web 2.0 has not only provided social organizations with a new and powerful tool for disseminating information but also brought about significant changes in the relationship between nonprofit organizations and their diverse audiences. This has facilitated and improved interaction between them. The purpose of this article is to analyze the level of interactivity implemented on the websites of leading NGOs worldwide and their presence on social networks, with the aim of assessing whether these influential groups are moving towards more dialogic systems in relation to their audience. The results reveal that NGOs have a high degree of interactivity in the tools used to present and disseminate information on their websites. However, not all maintain the same level of interactivity in the resources available for interaction with Internet users, as very few have high interactivity regarding bidirectional resources. It was concluded that international non-governmental organizations still suffer from certain shortcomings in the strategic management of digital communication on their web platforms, while, on the other hand, a strong presence can be noted on the most-popular social networks. Full article
(This article belongs to the Special Issue Social Internet of Things (SIoT))

13 pages, 926 KiB  
Article
Classification Tendency Difference Index Model for Feature Selection and Extraction in Wireless Intrusion Detection
Future Internet 2024, 16(1), 25; https://doi.org/10.3390/fi16010025 - 12 Jan 2024
Abstract
In detecting large-scale attacks, deep neural networks (DNNs) are an effective approach based on high-quality training data samples. Feature selection and feature extraction are the primary approaches for data quality enhancement for high-accuracy intrusion detection. However, their enhancement root causes usually present weak relationships to the differences between normal and attack behaviors in the data samples. Thus, we propose a Classification Tendency Difference Index (CTDI) model for feature selection and extraction in intrusion detection. The CTDI model consists of three indexes: Classification Tendency Frequency Difference (CTFD), Classification Tendency Membership Difference (CTMD), and Classification Tendency Distance Difference (CTDD). In the dataset, each feature has many feature values (FVs). In each FV, the normal and attack samples indicate the FV classification tendency, and CTDI shows the classification tendency differences between the normal and attack samples. CTFD is the frequency difference between the normal and attack samples. By employing fuzzy C-means (FCM) to establish the normal and attack clusters, CTMD is the membership difference between the clusters, and CTDD is the distance difference between the cluster centers. CTDI calculates the index score in each FV and summarizes the scores of all FVs in the feature as the feature score for each of the three indexes. CTDI adopts an autoencoder for feature extraction to generate new features from the dataset and calculates the three index scores for the new features. CTDI sorts the original and new features for each of the three indexes to select the best features. The selected CTDI features indicate the best classification tendency differences between normal and attack samples. The experimental results demonstrate that the CTDI features, classified by a DNN on the Aegean WiFi Intrusion Dataset, achieve better detection accuracy than those of related works, and the detection enhancements are based on the improved classification tendency differences in the CTDI features. Full article
(This article belongs to the Special Issue Information and Future Internet Security, Trust and Privacy II)
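
To make the index computation concrete, the following minimal sketch implements only the CTFD score as described in the abstract. The paper gives no pseudocode, so the function name, the per-class frequency normalization, and the absolute-difference aggregation are assumptions; CTMD and CTDD, which additionally require fuzzy C-means clusters, are omitted.
```python
import numpy as np

def ctfd_scores(X, y):
    """Sketch of the Classification Tendency Frequency Difference (CTFD) index.

    For each feature, iterate over its distinct feature values (FVs), take the
    absolute difference between the relative frequencies of normal (y == 0) and
    attack (y == 1) samples holding that FV, and sum the per-FV differences
    into a single feature score.
    """
    X, y = np.asarray(X), np.asarray(y)
    n_normal = max((y == 0).sum(), 1)
    n_attack = max((y == 1).sum(), 1)
    scores = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        for fv in np.unique(X[:, j]):
            mask = X[:, j] == fv
            freq_normal = (mask & (y == 0)).sum() / n_normal
            freq_attack = (mask & (y == 1)).sum() / n_attack
            scores[j] += abs(freq_normal - freq_attack)
    return scores  # higher score = stronger normal/attack tendency difference

# toy usage: rank the two features of a tiny categorical dataset
X = [[0, 1], [0, 1], [1, 0], [1, 1]]
y = [0, 0, 1, 1]
print(ctfd_scores(X, y))
```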

34 pages, 4069 KiB  
Article
Blockchain-Based Implementation of National Census as a Supplementary Instrument for Enhanced Transparency, Accountability, Privacy, and Security
Future Internet 2024, 16(1), 24; https://doi.org/10.3390/fi16010024 - 11 Jan 2024
Abstract
A national population census is instrumental in offering a holistic view of a country’s progress, directly influencing policy formulation and strategic planning. Potential flaws in the census system can have detrimental impacts on national development. Our prior research has pinpointed various deficiencies in current census methodologies, including inadequate population coverage, racial and ethnic discrimination, and challenges related to data privacy, security, and distribution. This study aims to address the “missing persons” challenge in the national population and housing census system. The integration of blockchain technology emerges as a promising solution for addressing these identified issues, enhancing the integrity and efficacy of census processes. Building upon our earlier research, which examined the national census system of Pakistan, we propose an architecture design incorporating Hyperledger Fabric and perform system sizing for a nationwide count. The Blockchain-Based Implementation of National Census as a Supplementary Instrument for Enhanced Transparency, Accountability, Privacy, and Security (BINC-TAPS) seeks to provide a robust, transparent, scalable, immutable, and tamper-proof solution for conducting national population and housing censuses, while also fostering socio-economic advancements. This paper presents a comprehensive overview of our research, with a primary focus on the implementation of the proposed blockchain-based solution, including prototype testing and the resulting outcomes. Full article

14 pages, 1587 KiB  
Article
Future Sustainable Internet Energy-Defined Networking
Future Internet 2024, 16(1), 23; https://doi.org/10.3390/fi16010023 - 09 Jan 2024
Abstract
This paper presents a comprehensive set of design methods for making future Internet networking fully energy-aware and for sustainably minimizing and managing its energy footprint. It includes (a) 41 energy-aware design methods, grouped into Service Operations Support, Management Operations Support, Compute Operations Support, Connectivity/Forwarding Operations Support, Traffic Engineering Methods, Architectural Support for Energy Instrumentation, and Network Configuration, and (b) energy consumption models and energy metrics, which are identified and specified. It also specifies the requirements for energy-defined network compliance, which include energy-measurable network devices supporting several control messages: registration, discovery, provisioning, discharge, monitoring, synchronization, flooding, performance, and pushback. Full article

31 pages, 1418 KiB  
Article
A Novel Semantic IoT Middleware for Secure Data Management: Blockchain and AI-Driven Context Awareness
Future Internet 2024, 16(1), 22; https://doi.org/10.3390/fi16010022 - 07 Jan 2024
Abstract
In the modern digital landscape of the Internet of Things (IoT), data interoperability and heterogeneity present critical challenges, particularly with the increasing complexity of IoT systems and networks. Addressing these challenges, while ensuring data security and user trust, is pivotal. This paper proposes a novel Semantic IoT Middleware (SIM) for healthcare. The architecture of this middleware comprises the following main processes: data generation, semantic annotation, security encryption, and semantic operations. The data generation module facilitates seamless data and event sourcing, while the Semantic Annotation Component assigns structured vocabulary for uniformity. SIM adopts blockchain technology to provide enhanced data security, and its layered approach ensures robust interoperability and intuitive user-centric operations for IoT systems. The security encryption module offers data protection, and the semantic operations module underpins data processing and integration. A distinctive feature of this middleware is its proficiency in service integration, leveraging semantic descriptions augmented by user feedback. Additionally, SIM integrates artificial intelligence (AI) feedback mechanisms to continuously refine and optimise the middleware’s operational efficiency. Full article

19 pages, 858 KiB  
Article
A Comprehensive Study and Analysis of the Third Generation Partnership Project’s 5G New Radio for Vehicle-to-Everything Communication
Future Internet 2024, 16(1), 21; https://doi.org/10.3390/fi16010021 - 06 Jan 2024
Abstract
Recently, the Third Generation Partnership Project (3GPP) introduced new radio (NR) technology for vehicle-to-everything (V2X) communication to enable delay-sensitive and bandwidth-hungry applications in vehicular communication. The NR system is strategically crafted to complement the existing long-term evolution (LTE) cellular-vehicle-to-everything (C-V2X) infrastructure, particularly to support advanced services such as the operation of automated vehicles. It is widely anticipated that the fifth-generation (5G) NR system will surpass LTE C-V2X in terms of achieving superior performance in scenarios characterized by high throughput, low latency, and enhanced reliability, especially in the context of congested traffic conditions and a diverse range of vehicular applications. This article provides a comprehensive literature review on vehicular communications from dedicated short-range communication (DSRC) to NR V2X. Subsequently, it delves into a detailed examination of the challenges and opportunities inherent in NR V2X technology. Finally, we elucidate the process of creating and analyzing an open-source 5G NR V2X module in Network Simulator 3 (ns-3) and then demonstrate the NR V2X performance in terms of different key performance indicators implemented through diverse operational scenarios. Full article

22 pages, 2705 KiB  
Article
Joint Beam-Forming Optimization for Active-RIS-Assisted Internet-of-Things Networks with SWIPT
Future Internet 2024, 16(1), 20; https://doi.org/10.3390/fi16010020 - 06 Jan 2024
Abstract
Network energy resources are limited in communication systems, which may cause energy shortages in mobile devices at the user end. Active Reconfigurable Intelligent Surfaces (A-RIS) not only have phase modulation properties but also enhance the signal strength; thus, they are expected to solve the energy shortage problem experienced at the user end in 6G communications. In this paper, a resource allocation algorithm for maximizing the sum of harvested energy is proposed for an active RIS-assisted Simultaneous Wireless Information and Power Transfer (SWIPT) system, to address the low harvested energy available to users due to multiplicative fading. First, in the active RIS-assisted SWIPT system, which uses a power-splitting architecture to achieve joint information and energy transmission, the resource allocation problem is constructed with the objective of maximizing the sum of the harvested energy of all users, under constraints on the signal-to-noise ratio, the active RIS and base station transmit powers, and the power-splitting factors. Second, the considered non-convex problem is transformed into a standard convex problem by using alternating optimization, semi-definite relaxation, successive convex approximation, and penalty functions, and an alternating iterative algorithm for energy harvesting is proposed. The proposed algorithm splits the problem into two sub-problems, optimizes each iteratively, and alternates between them to obtain the optimal solution. Simulation results show that the proposed algorithm improves performance by 45.2% and 103.7% compared to the passive RIS algorithm and the traditional without-RIS algorithm, respectively, at the maximum permissible transmit power of 45 dBm at the base station. Full article
(This article belongs to the Special Issue Moving towards 6G Wireless Technologies)
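
The abstract does not state the exact optimization problem, so the formulation below is only an orientation sketch of how such a power-splitting SWIPT design is commonly written; all symbols (beamformers w_k, RIS phase matrix Φ, splitting factors ρ_k, conversion efficiency η, channels G and h_k, noise terms, and power limits) are assumed notation, not the authors'.
```latex
\begin{align}
\max_{\{\mathbf{w}_k\},\,\boldsymbol{\Phi},\,\{\rho_k\}}\quad
  & \sum_{k=1}^{K} \eta\,(1-\rho_k)\,
    \bigl|\mathbf{h}_k^{H}\boldsymbol{\Phi}\mathbf{G}\,\mathbf{w}_k\bigr|^{2}
    && \text{(sum of harvested energy)} \\
\text{s.t.}\quad
  & \frac{\rho_k\,\bigl|\mathbf{h}_k^{H}\boldsymbol{\Phi}\mathbf{G}\,\mathbf{w}_k\bigr|^{2}}
         {\rho_k\,\sigma_k^{2}+\delta_k^{2}} \;\ge\; \gamma_{\min},
    && \text{(per-user SNR)} \\
  & \sum_{k=1}^{K}\lVert\mathbf{w}_k\rVert^{2} \le P_{\mathrm{BS}},\qquad
    P_{\mathrm{RIS}}(\boldsymbol{\Phi}) \le P_{\mathrm{RIS}}^{\max},\qquad
    0<\rho_k<1
    && \text{(transmit powers, splitting factors)}
\end{align}
```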

17 pages, 3053 KiB  
Article
Proximal Policy Optimization for Efficient D2D-Assisted Computation Offloading and Resource Allocation in Multi-Access Edge Computing
Future Internet 2024, 16(1), 19; https://doi.org/10.3390/fi16010019 - 02 Jan 2024
Abstract
In the advanced 5G and beyond networks, multi-access edge computing (MEC) is increasingly recognized as a promising technology, offering the dual advantages of reducing energy utilization in cloud data centers while catering to the demands for reliability and real-time responsiveness in end devices. However, the inherent complexity and variability of MEC networks pose significant challenges in computational offloading decisions. To tackle this problem, we propose a proximal policy optimization (PPO)-based Device-to-Device (D2D)-assisted computation offloading and resource allocation scheme. We construct a realistic MEC network environment and develop a Markov decision process (MDP) model that minimizes time loss and energy consumption. The integration of a D2D communication-based offloading framework allows for collaborative task offloading between end devices and MEC servers, enhancing both resource utilization and computational efficiency. The MDP model is solved using the PPO algorithm in deep reinforcement learning to derive an optimal policy for offloading and resource allocation. Extensive comparative analysis with three benchmarked approaches has confirmed our scheme’s superior performance in latency, energy consumption, and algorithmic convergence, demonstrating its potential to improve MEC network operations in the context of emerging 5G and beyond technologies. Full article
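
As a rough illustration of how such an MDP can be exposed to a PPO solver, here is a toy gymnasium-style environment sketch; the state variables, the three-way action (local / D2D / MEC), and the delay-energy cost model are invented placeholders, not the paper's formulation.
```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces

class OffloadEnv(gym.Env):
    """Toy MDP for D2D-assisted MEC computation offloading (illustrative only).

    State : [task size (Mbit), uplink rate (Mbit/s), D2D rate (Mbit/s), server load]
    Action: 0 = execute locally, 1 = offload via a D2D helper, 2 = offload to the MEC server
    Reward: negative weighted sum of task delay and energy consumption.
    """
    def __init__(self, w_delay=0.5, w_energy=0.5):
        self.w_delay, self.w_energy = w_delay, w_energy
        self.observation_space = spaces.Box(low=0.0, high=np.inf, shape=(4,), dtype=np.float32)
        self.action_space = spaces.Discrete(3)

    def _sample_state(self):
        return np.array([np.random.uniform(1, 10),    # task size
                         np.random.uniform(5, 50),    # uplink rate
                         np.random.uniform(10, 100),  # D2D rate
                         np.random.uniform(0, 1)],    # server load
                        dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.state = self._sample_state()
        return self.state, {}

    def step(self, action):
        size, up, d2d, load = self.state
        if action == 0:    # local execution: slow CPU, higher energy
            delay, energy = size / 2.0, 0.8 * size
        elif action == 1:  # D2D offload: fast link, low transmit energy
            delay, energy = size / d2d + size / 4.0, 0.2 * size
        else:              # MEC offload: fast server, but queuing when heavily loaded
            delay, energy = size / up + size / (8.0 * (1 - load) + 1e-3), 0.3 * size
        reward = -(self.w_delay * delay + self.w_energy * energy)
        self.state = self._sample_state()
        return self.state, reward, True, False, {}  # one task per episode

# any PPO implementation (e.g., stable-baselines3) could then be trained on OffloadEnv
```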

23 pages, 1647 KiB  
Article
Controllable Queuing System with Elastic Traffic and Signals for Resource Capacity Planning in 5G Network Slicing
Future Internet 2024, 16(1), 18; https://doi.org/10.3390/fi16010018 - 31 Dec 2023
Abstract
Fifth-generation (5G) networks provide network slicing capabilities, enabling the deployment of multiple logically isolated network slices on a single infrastructure platform to meet specific requirements of users. This paper focuses on modeling and analyzing resource capacity planning and reallocation for network slicing, specifically between two providers transmitting elastic traffic, such as during web browsing. A controller determines the need for resource reallocation and plans new resource capacity accordingly. A Markov decision process is employed in a controllable queuing system to find the optimal resource capacity for each provider. The reward function incorporates three network slicing principles: maximum matching for equal resource partitioning, maximum share of signals resulting in resource reallocation, and maximum resource utilization. To efficiently compute the optimal resource capacity planning policy, we developed an iterative algorithm that begins with maximum resource utilization as the starting point. Through numerical demonstrations, we show the optimal policy and metrics of resource reallocation for two services: web browsing and bulk data transfer. The results highlight fast convergence within three iterations and the effectiveness of the balanced three-principle approach in resource capacity planning for 5G network slicing. Full article
(This article belongs to the Special Issue Performance and QoS Issues of 5G Wireless Networks and Beyond)
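
For orientation only, the sketch below runs value iteration on a toy capacity-splitting MDP between two providers; the state space, the deterministic transition rule, and the two-term reward standing in for the paper's three-principle reward are assumptions made purely for illustration.
```python
import numpy as np

# Hypothetical toy MDP: state s = capacity units currently assigned to provider 1
# (out of C total), action a = new split chosen when a reallocation signal arrives.
C = 10                     # total resource units shared by the two slices
states = np.arange(C + 1)  # provider 1 holds s units, provider 2 holds C - s
actions = states           # action a = reassign provider 1 to a units
gamma = 0.9                # discount factor

def reward(s, a):
    # toy stand-in for the paper's principles:
    # prefer balanced partitioning, and penalize the size of each reallocation
    balance = -abs(a - C / 2)
    realloc_penalty = -0.5 * abs(a - s)
    return balance + realloc_penalty

V = np.zeros(len(states))
for _ in range(200):  # value iteration until (approximate) convergence
    Q = np.array([[reward(s, a) + gamma * V[a] for a in actions] for s in states])
    V = Q.max(axis=1)

policy = Q.argmax(axis=1)
print(policy)  # optimal capacity to assign to provider 1 from each current split
```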

24 pages, 2445 KiB  
Article
Internet-of-Things Traffic Analysis and Device Identification Based on Two-Stage Clustering in Smart Home Environments
Future Internet 2024, 16(1), 17; https://doi.org/10.3390/fi16010017 - 31 Dec 2023
Abstract
Smart home environments, which consist of various Internet of Things (IoT) devices to support and improve our daily lives, are expected to be widely adopted in the near future. Owing to a lack of awareness regarding the risks associated with IoT devices and challenges in replacing or updating their firmware, adequate security measures have not been implemented. Instead, IoT device identification methods based on traffic analysis have been proposed. Since conventional methods process and analyze traffic data simultaneously, bias in the occurrence rate of traffic patterns has a negative impact on the analysis results. Therefore, this paper proposes an IoT traffic analysis and device identification method based on two-stage clustering in smart home environments. In the first step, traffic patterns are extracted by clustering IoT traffic at a local gateway located in each smart home and subsequently sent to a cloud server. In the second step, the cloud server extracts common traffic units to represent IoT traffic by clustering the patterns obtained in the first step. Two-stage clustering can reduce the impact of data bias, because each cluster extracted in the first clustering is summarized as one value and used as a single data point in the second clustering, regardless of the occurrence rate of traffic patterns. Through the proposed two-stage clustering method, IoT traffic is transformed into time series vector data that consist of common unit patterns and can be identified based on time series representations. Experiments using public IoT traffic datasets indicated that the proposed method could identify 21 IoT devices with an accuracy of 86.9%. Therefore, we can conclude that traffic analysis using two-stage clustering is effective for improving clustering quality, device identification, and implementation in distributed environments. Full article
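
A minimal sketch of the two-stage idea is shown below, assuming k-means as a stand-in clustering algorithm (the abstract does not prescribe one) and random vectors in place of real per-flow traffic features.
```python
import numpy as np
from sklearn.cluster import KMeans

# Stage 1 (local gateway): cluster each home's traffic feature vectors into patterns
# and forward only the pattern centroids to the cloud.
def local_patterns(traffic_features, n_patterns=5):
    km = KMeans(n_clusters=n_patterns, n_init=10, random_state=0).fit(traffic_features)
    return km.cluster_centers_

# Stage 2 (cloud): cluster the pooled centroids from all homes into common traffic units,
# so each first-stage pattern counts once regardless of how often it occurred.
def common_units(all_centroids, n_units=10):
    return KMeans(n_clusters=n_units, n_init=10, random_state=0).fit(all_centroids)

# toy usage with random "per-flow" feature vectors from three homes
homes = [np.random.rand(200, 6) for _ in range(3)]
pooled = np.vstack([local_patterns(h) for h in homes])
units = common_units(pooled, n_units=4)

# a device's traffic can then be encoded as a time series of common-unit labels
labels = units.predict(np.random.rand(50, 6))
print(labels)
```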

22 pages, 828 KiB  
Review
An Analysis of Methods and Metrics for Task Scheduling in Fog Computing
Future Internet 2024, 16(1), 16; https://doi.org/10.3390/fi16010016 - 30 Dec 2023
Abstract
The Internet of Things (IoT) uptake brought a paradigm shift in application deployment. Indeed, IoT applications are not centralized in cloud data centers, but the computation and storage are moved close to the consumers, creating a computing continuum between the edge of the network and the cloud. This paradigm shift is called fog computing, a concept introduced by Cisco in 2012. Scheduling applications in this decentralized, heterogeneous, and resource-constrained environment is challenging. The task scheduling problem in fog computing has been widely explored and addressed using many approaches, from traditional operational research to heuristics and machine learning. This paper aims to analyze the literature on task scheduling in fog computing published in the last five years to classify the criteria used for decision-making and the technique used to solve the task scheduling problem. We propose a taxonomy of task scheduling algorithms, and we identify the research gaps and challenges. Full article

27 pages, 2667 KiB  
Article
Resource Indexing and Querying in Large Connected Environments
Future Internet 2024, 16(1), 15; https://doi.org/10.3390/fi16010015 - 30 Dec 2023
Abstract
The proliferation of sensor and actuator devices in Internet of things (IoT) networks has garnered significant attention in recent years. However, the increasing number of IoT devices, and the corresponding resources, has introduced various challenges, particularly in indexing and querying. In essence, resource management has become more complex due to the non-uniform distribution of related devices and their limited capacity. Additionally, the diverse demands of users have further complicated resource indexing. This paper proposes a distributed resource indexing and querying algorithm for large connected environments, specifically designed to address the challenges posed by IoT networks. The algorithm considers both the limited device capacity and the non-uniform distribution of devices, acknowledging that devices cannot store information about the entire environment. Furthermore, it places special emphasis on uncovered zones, to reduce the response time of queries related to these areas. Moreover, the algorithm introduces different types of queries, to cater to various user needs, including fast queries and urgent queries suitable for different scenarios. The effectiveness of the proposed approach was evaluated through extensive experiments covering index creation, coverage, and query execution, yielding promising and insightful results. Full article

26 pages, 1226 KiB  
Article
1-D Convolutional Neural Network-Based Models for Cooperative Spectrum Sensing
Future Internet 2024, 16(1), 14; https://doi.org/10.3390/fi16010014 - 29 Dec 2023
Abstract
Spectrum sensing is an essential function of cognitive radio technology that can enable the reuse of available radio resources by so-called secondary users without creating harmful interference with licensed users. The application of machine learning techniques to spectrum sensing has attracted considerable interest in the literature. In this contribution, we study cooperative spectrum sensing in a cognitive radio network where multiple secondary users cooperate to detect a primary user. We introduce multiple cooperative spectrum sensing schemes based on a deep neural network, which incorporate a one-dimensional convolutional neural network and a long short-term memory network. The primary objective of these schemes is to effectively learn the activity patterns of the primary user. The scenario of an imperfect transmission channel is considered for service messages to demonstrate the robustness of the proposed model. The performance of the proposed methods is evaluated with the receiver operating characteristic curve, the probability of detection for various SNR levels and the computational time. The simulation results confirm the effectiveness of the bidirectional long short-term memory-based method, surpassing the performance of the other proposed schemes and the current state-of-the-art methods in terms of detection probability, while ensuring a reasonable online detection time. Full article
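
As a rough illustration in the spirit of the schemes described above, the following Keras sketch combines a 1-D convolutional front end with a bidirectional LSTM classifier for primary-user detection; the input shape, layer sizes, and training settings are assumptions, not the authors' configuration.
```python
import tensorflow as tf
from tensorflow.keras import layers, models

# input: a sequence of T sensing reports, each with F features (e.g., per-SU energy values)
T, F = 128, 4

model = models.Sequential([
    layers.Input(shape=(T, F)),
    layers.Conv1D(32, kernel_size=5, activation="relu", padding="same"),  # local signal features
    layers.MaxPooling1D(2),
    layers.Bidirectional(layers.LSTM(64)),   # learns the primary user's temporal activity pattern
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),   # P(primary user present)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```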

20 pages, 892 KiB  
Article
Vnode: Low-Overhead Transparent Tracing of Node.js-Based Microservice Architectures
Future Internet 2024, 16(1), 13; https://doi.org/10.3390/fi16010013 - 29 Dec 2023
Abstract
Tracing serves as a key method for evaluating the performance of microservices-based architectures, which are renowned for their scalability, resource efficiency, and high availability. Despite their advantages, these architectures often pose unique debugging challenges that necessitate trade-offs, including the burden of instrumentation overhead. With Node.js emerging as a leading development environment recognized for its rapidly growing ecosystem, there is a pressing need for innovative performance debugging approaches that reduce the telemetry data collection efforts and the overhead incurred by the environment’s instrumentation. In response, we introduce a new approach designed for transparent tracing and performance debugging of microservices in cloud settings. This approach is centered around our newly developed Internal Transparent Tracing and Context Reconstruction (ITTCR) technique. ITTCR is adept at correlating internal metrics from various distributed trace files to reconstruct the intricate execution contexts of microservices operating in a Node.js environment. Our method achieves transparency by directly instrumenting the Node.js virtual machine, enabling the collection and analysis of trace events in a transparent manner. This process facilitates the creation of visualization tools, enhancing the understanding and analysis of microservice performance in cloud environments. Compared to other methods, our approach incurs an overhead of approximately 5% on the system for the trace collection infrastructure while exhibiting minimal utilization of system resources during analysis execution. Experiments demonstrate that our technique scales well with very large trace files containing huge numbers of events and performs analyses in very acceptable timeframes. Full article

21 pages, 1096 KiB  
Article
Evaluating Embeddings from Pre-Trained Language Models and Knowledge Graphs for Educational Content Recommendation
Future Internet 2024, 16(1), 12; https://doi.org/10.3390/fi16010012 - 29 Dec 2023
Abstract
Educational content recommendation is a cornerstone of AI-enhanced learning. In particular, to facilitate navigating the diverse learning resources available on learning platforms, methods are needed for automatically linking learning materials, e.g., in order to recommend textbook content based on exercises. Such methods are typically based on semantic textual similarity (STS) and the use of embeddings for text representation. However, it remains unclear what types of embeddings should be used for this task. In this study, we carry out an extensive empirical evaluation of embeddings derived from three different types of models: (i) static embeddings trained using a concept-based knowledge graph, (ii) contextual embeddings from a pre-trained language model, and (iii) contextual embeddings from a large language model (LLM). In addition to evaluating the models individually, various ensembles are explored based on different strategies for combining two models in an early vs. late fusion fashion. The evaluation is carried out using digital textbooks in Swedish for three different subjects and two types of exercises. The results show that using contextual embeddings from an LLM leads to superior performance compared to the other models, and that there is no significant improvement when combining these with static embeddings trained using a knowledge graph. When using embeddings derived from a smaller language model, however, it helps to combine them with knowledge graph embeddings. The performance of the best-performing model is high for both types of exercises, resulting in a mean Recall@3 of 0.96 and 0.95 and a mean MRR of 0.87 and 0.86 for quizzes and study questions, respectively, demonstrating the feasibility of using STS based on text embeddings for educational content recommendation. The ability to link digital learning materials in an unsupervised manner—relying only on readily available pre-trained models—facilitates the development of AI-enhanced learning. Full article
(This article belongs to the Special Issue Deep Learning in Recommender Systems)
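
As a small illustration of STS-based linking with text embeddings, the sketch below ranks textbook sections against an exercise by cosine similarity; the sentence-transformers model name, the English toy texts, and the single-model (no ensemble) setup are placeholders rather than the paper's Swedish setup.
```python
import numpy as np
from sentence_transformers import SentenceTransformer  # stand-in embedding model

# Exercises are linked to textbook sections by ranking cosine similarity of embeddings.
model = SentenceTransformer("all-MiniLM-L6-v2")  # assumption: any pre-trained encoder works here

sections = [
    "Photosynthesis converts light energy into chemical energy.",
    "Newton's second law relates force, mass and acceleration.",
    "Mitosis is the process of cell division.",
]
exercise = "Which law states that F = m * a?"

sec_emb = model.encode(sections, normalize_embeddings=True)
ex_emb = model.encode([exercise], normalize_embeddings=True)

scores = sec_emb @ ex_emb.T             # cosine similarity (embeddings are normalized)
top3 = np.argsort(-scores.ravel())[:3]  # recommend the 3 most similar sections
print([sections[i] for i in top3])      # Recall@3 checks whether the gold section is among these
```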
