The connection patterns of neural circuits in the brain form a complex network. Collective signalling within the network manifests as patterned neural activity and is thought to support human cognition and adaptive behaviour. Recent technological advances permit macroscale reconstructions of biological brain networks. These maps, termed connectomes, display multiple non-random architectural features, including heavy-tailed degree distributions, segregated communities and a densely interconnected core. Yet, how computation and functional specialization emerge from network architecture remains unknown. Here we reconstruct human brain connectomes using in vivo diffusion-weighted imaging and use reservoir computing to implement connectomes as artificial neural networks. We then train these neuromorphic networks to learn a memory-encoding task. We show that biologically realistic neural architectures perform best when they display critical dynamics. We find that performance is driven by network topology and that the modular organization of intrinsic networks is computationally relevant. We observe a prominent interaction between network structure and dynamics throughout, such that the same underlying architecture can support a wide range of memory capacity values as well as different functions (encoding or decoding), depending on the dynamical regime the network is in. This work opens new opportunities to discover how the network organization of the brain optimizes cognitive capacity.
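The reservoir-computing setup described above can be sketched as a minimal echo state network trained on the classic memory-capacity task. This is an illustrative stand-in, not the authors' pipeline: the reservoir weights below are random (the study substitutes a connectome-derived adjacency matrix), and the spectral radius of 0.9 is a hypothetical knob for pushing the dynamics toward the critical regime.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters; in the study the weight matrix would come from
# a diffusion-imaging connectome rather than a random draw.
N = 200                  # reservoir size
T = 2000                 # time steps
washout = 200            # initial transient to discard
spectral_radius = 0.9    # near-critical dynamics (illustrative value)

W = rng.normal(size=(N, N))
W *= spectral_radius / np.abs(np.linalg.eigvals(W)).max()
w_in = rng.uniform(-1, 1, size=N)

u = rng.uniform(-1, 1, size=T)   # random input stream to be remembered
x = np.zeros(N)
states = np.zeros((T, N))
for t in range(T):
    x = np.tanh(W @ x + w_in * u[t])
    states[t] = x

# Memory capacity: for each delay k, train a linear readout to reconstruct
# u[t - k] and sum the squared correlations between prediction and target.
X = states[washout:]
mc = 0.0
for k in range(1, 31):
    target = u[washout - k:T - k]
    w = np.linalg.lstsq(X, target, rcond=None)[0]
    pred = X @ w
    r = np.corrcoef(pred, target)[0, 1]
    mc += r ** 2
print(round(mc, 2))
```

Sweeping `spectral_radius` through and past 1.0 is the standard way to probe how memory capacity depends on the dynamical regime.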
DeepMind presented remarkably accurate predictions at the recent CASP14 protein structure prediction assessment conference. We explored network architectures incorporating related ideas and obtained the best performance with a three-track network in which information at the 1D sequence level, the 2D distance map level, and the 3D coordinate level is successively transformed and integrated. The three-track network produces structure predictions with accuracies approaching those of DeepMind in CASP14, enables the rapid solution of challenging X-ray crystallography and cryo-EM structure modeling problems, and provides insights into the functions of proteins of currently unknown structure. The network also enables rapid generation of accurate protein-protein complex models from sequence information alone, short-circuiting traditional approaches that require modeling of individual subunits followed by docking. We make the method available to the scientific community to speed biological research.
Purpose To develop an algorithm to classify postcontrast T1-weighted MRI scans by tumor class (high grade glioma, low grade glioma, brain metastases, meningioma, pituitary adenoma, and acoustic neuroma) and a healthy class. Materials and Methods In this retrospective study, preoperative postcontrast T1-weighted MRI scans from four publicly available datasets, Brain Tumor Image Segmentation (n = 378), LGG-1p19q (n = 145), The Cancer Genome Atlas Glioblastoma Multiforme (n = 141), and The Cancer Genome Atlas Low Grade Glioma (n = 68), and an internal clinical dataset (n = 1373) were used. In total, 2105 scans were split into a training set (n = 1396), an internal test set (n = 361), and an external test set (n = 348). A convolutional neural network was trained to classify tumor type and to discriminate between healthy scans and scans with tumors. The performance of the model was evaluated using cross-validation, internal testing, and external testing. Feature maps were plotted to visualize network attention. Accuracy, positive predictive value (PPV), negative predictive value (NPV), sensitivity, specificity, F1 score, and areas under the receiver operating characteristic curve (AUC) and precision-recall curve (AUPRC) were calculated. Results On the internal test dataset, across the seven classes, the sensitivity, PPV, AUC, and AUPRC ranged from 87%–100%, 85%–100%, 0.98–1.00, and 0.91–1.00, respectively. On the external data, they ranged from 91%–97%, 73%–99%, 0.97–0.98, and 0.90–1.00, respectively. Conclusion The developed model was capable of classifying postcontrast T1-weighted MRI scans of different intracranial tumor types and of discriminating pathologic from healthy scans.
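The per-class metrics reported above follow directly from the entries of a binary confusion matrix. The counts below are hypothetical, chosen only to show the definitions; they are not results from the study.

```python
# Hypothetical confusion-matrix counts for one tumor class on a test set
# (tp = true positives, fp = false positives, fn = false negatives,
#  tn = true negatives); these are NOT the study's numbers.
tp, fp, fn, tn = 90, 5, 8, 258

sensitivity = tp / (tp + fn)              # a.k.a. recall
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)                      # positive predictive value
npv = tn / (tn + fn)                      # negative predictive value
f1 = 2 * ppv * sensitivity / (ppv + sensitivity)

print(round(sensitivity, 3), round(ppv, 3), round(f1, 3))
```

For the multiclass setting, each of the seven classes is scored one-vs-rest with its own set of counts.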
Brain–computer interfaces (BCIs) provide bidirectional communication between the brain and output devices that translate user intent into function. Among the different brain imaging techniques used to operate BCIs, electroencephalography (EEG) is the method of choice, owing to its relatively low cost, ease of use, high temporal resolution, and noninvasiveness. In recent years, significant progress in wearable technologies and computational intelligence has greatly enhanced the performance and capabilities of EEG-based BCIs (eBCIs) and propelled their migration out of the laboratory and into real-world environments. This rapid translation constitutes a paradigm shift in human–machine interaction that will deeply transform different industries in the near future, including healthcare and wellbeing, entertainment, security, education, and marketing. In this contribution, the state of the art in wearable biosensing is reviewed, focusing on the development of novel electrode interfaces for long-term and noninvasive EEG monitoring. Commercially available EEG platforms are surveyed, and a comparative analysis is presented based on the benefits and limitations they provide for eBCI development. Emerging applications in neuroscientific research and future trends related to the widespread implementation of eBCIs for medical and nonmedical uses are discussed. Finally, a commentary on the ethical, social, and legal concerns associated with this increasingly ubiquitous technology is provided, as well as general recommendations to address key issues related to mainstream consumer adoption.
Deep learning has advanced rapidly in recent years and plays a major role in image classification, including medical imaging. Convolutional neural networks (CNNs) perform well in detecting many diseases, including coronary artery disease, malaria, Alzheimer's disease, various dental diseases, and Parkinson's disease. Likewise, CNNs hold substantial promise for detecting COVID-19 patients from medical images such as chest X-rays and CT scans. COVID-19 has been declared a global pandemic by the World Health Organization (WHO). As of 8 August 2020, there were 19.18 million confirmed COVID-19 cases and 0.716 million deaths worldwide. Detecting COVID-19-positive patients is critical to preventing the spread of the virus. To this end, a CNN model is proposed to detect COVID-19 patients from chest X-ray images. Two further CNN models with different numbers of convolutional layers and three models based on pretrained ResNet50, VGG-16, and VGG-19 are evaluated in a comparative analysis. All six models are trained and validated with Dataset 1 and Dataset 2. Dataset 1 contains 201 normal and 201 COVID-19 chest X-rays, whereas Dataset 2 is larger, with 659 normal and 295 COVID-19 chest X-ray images. The proposed model achieves an accuracy of 98.3% and a precision of 96.72% on Dataset 2, with an area under the receiver operating characteristic (ROC) curve of 0.983 and an F1 score of 98.3%. Moreover, this work analyzes how the number of convolutional layers and the dataset size affect classification performance.
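One concrete way to reason about what varying the number of convolutional layers buys is the growth of the receptive field seen by each output unit. The helper below is a generic textbook calculation, not the paper's code, and the example stack (3×3 convolutions each followed by 2×2 max pooling) is a hypothetical architecture for illustration.

```python
def receptive_field(layers):
    """Receptive field (in input pixels) of one output unit of a stack of
    conv/pool layers, listed in input-to-output order.
    layers: list of (kernel_size, stride) tuples."""
    rf, jump = 1, 1  # jump = cumulative stride measured in input pixels
    for kernel, stride in layers:
        rf += (kernel - 1) * jump
        jump *= stride
    return rf

# Hypothetical stack: three 3x3 conv layers, each followed by 2x2 max pooling
stack = [(3, 1), (2, 2), (3, 1), (2, 2), (3, 1), (2, 2)]
print(receptive_field(stack))  # → 22
```

Adding a fourth conv/pool pair roughly doubles the region of the X-ray each feature integrates over, which is one reason deeper variants can behave differently on the same data.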
This article reviews state-of-the-art applications of Internet of Things (IoT) technology in homes, making them smart, automated, and digitalized in many respects. The literature presents various applications, systems, and methods and reports results of using IoT, artificial intelligence (AI), and geographic information systems (GIS) in homes. Because the technology is advancing and users are experiencing an IoT boom in smart built-environment applications, especially smart homes and smart energy systems, it is necessary to identify the gaps and the relations between current methods, and to provide a coherent account of the whole process of designing smart homes. Relevant journal papers published between 2010 and 2019 were identified in databases such as Scopus. These papers were analyzed bibliographically and by content to identify related systems, practices, and contributors. A systematic review method was used to select the relevant papers, whose content was then coded and reviewed. The presented systematic critical review focuses on systems developed and technologies used for smart homes. The main question is: "What has been learned from a decade of tracking smart-system developments in different fields?" We found a considerable gap in the integration of AI and IoT and in the use of geospatial data in smart-home development, as well as a large gap in the literature on integrated systems for energy efficiency and aged-care systems. This article enables researchers and professionals to understand these gaps in IoT-based environments and suggests ways to fill them when designing smart homes that give users greater thermal comfort while reducing energy consumption and greenhouse gas emissions.
This article also raises new, challenging questions about how IoT and existing systems could be improved and further developed to address other energy-saving issues, steering research toward fully smart systems. This would significantly help in designing fully automated assistive systems that improve quality of life and decrease energy consumption.
BACKGROUND Technology to restore the ability to communicate in paralyzed persons who cannot speak has the potential to improve autonomy and quality of life. An approach that decodes words and sentences directly from the cerebral cortical activity of such patients may represent an advancement over existing methods for assisted communication. METHODS We implanted a subdural, high-density, multielectrode array over the area of the sensorimotor cortex that controls speech in a person with anarthria (the loss of the ability to articulate speech) and spastic quadriparesis caused by a brain-stem stroke. Over the course of 48 sessions, we recorded 22 hours of cortical activity while the participant attempted to say individual words from a vocabulary set of 50 words. We used deep-learning algorithms to create computational models for the detection and classification of words from patterns in the recorded cortical activity. We applied these computational models, as well as a natural-language model that yielded next-word probabilities given the preceding words in a sequence, to decode full sentences as the participant attempted to say them. RESULTS We decoded sentences from the participant’s cortical activity in real time at a median rate of 15.2 words per minute, with a median word error rate of 25.6%. In post hoc analyses, we detected 98% of the attempts by the participant to produce individual words, and we classified words with 47.1% accuracy using cortical signals that were stable throughout the 81-week study period. CONCLUSIONS In a person with anarthria and spastic quadriparesis caused by a brain-stem stroke, words and sentences were decoded directly from cortical activity during attempted speech with the use of deep-learning models and a natural-language model. (Funded by Facebook and others; ClinicalTrials.gov number, NCT03698149.)
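The sentence-decoding step, combining per-word classifier probabilities with next-word probabilities from a language model, can be illustrated with a toy Viterbi search over a three-word vocabulary. All probabilities and the bigram table below are made-up numbers; the study's actual vocabulary has 50 words and its language model is far richer.

```python
import math

# Toy sentence decoding: combine per-word classifier probabilities with a
# bigram language model via a Viterbi search. All numbers are made up.
vocab = ["i", "am", "thirsty"]
# Classifier probabilities for three attempted words (rows) over vocab (cols)
emit = [[0.6, 0.3, 0.1],
        [0.2, 0.5, 0.3],
        [0.1, 0.2, 0.7]]
bigram = {("i", "am"): 0.8, ("am", "thirsty"): 0.7}

def lm(prev, word):
    """Next-word probability, with a small floor for unseen bigrams."""
    return bigram.get((prev, word), 0.1)

# Initialise with a uniform prior over the first word
best = {w: math.log(1 / 3) + math.log(emit[0][i]) for i, w in enumerate(vocab)}
backpointers = []
for t in range(1, 3):
    new, bp = {}, {}
    for i, w in enumerate(vocab):
        prev = max(best, key=lambda p: best[p] + math.log(lm(p, w)))
        new[w] = best[prev] + math.log(lm(prev, w)) + math.log(emit[t][i])
        bp[w] = prev
    best, backpointers = new, backpointers + [bp]

# Backtrace the most probable sentence
word = max(best, key=best.get)
sentence = [word]
for bp in reversed(backpointers):
    word = bp[word]
    sentence.append(word)
sentence.reverse()
print(sentence)  # → ['i', 'am', 'thirsty']
```

Note how the language model overrules the classifier where the neural evidence is ambiguous, which is the mechanism by which word error rates drop at the sentence level.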
Context. Solar activity plays a quintessential role in affecting the interplanetary medium and space weather around Earth. Remote-sensing instruments on board heliophysics space missions provide a pool of information about solar activity by measuring the solar magnetic field and the emission of light from the multilayered, multithermal, and dynamic solar atmosphere. Extreme-UV (EUV) wavelength observations from space help in understanding the subtleties of the outer layers of the Sun, that is, the chromosphere and the corona. Unfortunately, instruments such as the Atmospheric Imaging Assembly (AIA) on board the NASA Solar Dynamics Observatory (SDO) suffer from time-dependent degradation that reduces their sensitivity. The current best calibration techniques rely on flights of sounding rockets to maintain absolute calibration. These flights are infrequent, complex, and limited to a single vantage point, however. Aims. We aim to develop a novel method based on machine learning (ML) that exploits spatial patterns on the solar surface across multiwavelength observations to autocalibrate the instrument degradation. Methods. We established two convolutional neural network (CNN) architectures that take either single-channel or multichannel input and trained the models using the SDOML dataset. The dataset was further augmented by randomly degrading images at each epoch, with the training dataset spanning months that do not overlap with the test dataset. We also developed a non-ML baseline model to assess the gain of the CNN models. With the best trained models, we reconstructed the AIA multichannel degradation curves of 2010–2020 and compared them with the degradation curves based on sounding-rocket data. Results. Our results indicate that the CNN-based models significantly outperform the non-ML baseline model in calibrating instrument degradation.
Moreover, the multichannel CNN outperforms the single-channel CNN, which suggests that cross-channel relations between different EUV channels are important for recovering the degradation profiles. The CNN-based models reproduce the degradation corrections derived from the sounding-rocket cross-calibration measurements within the experimental measurement uncertainty, indicating that they perform as well as current techniques. Conclusions. Our approach establishes the framework for a novel CNN-based technique to calibrate EUV instruments. We envision that this technique can be adapted to other imaging or spectral instruments operating at other wavelengths.
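The random-degradation augmentation described in the Methods can be sketched as a multiplicative dimming of each channel, with the dimming factors serving as regression targets for the CNN. The function name, factor range, and toy array shapes below are assumptions for illustration, not the actual SDOML pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

def degrade(images, low=0.1, high=1.0):
    """Dim each channel of each image by a random multiplicative factor,
    mimicking time-dependent instrument degradation. `images` has shape
    (batch, channels, height, width); returns the degraded images and the
    per-channel factors a network would be trained to recover."""
    batch, channels = images.shape[:2]
    factors = rng.uniform(low, high, size=(batch, channels))
    return images * factors[:, :, None, None], factors

imgs = rng.random((4, 7, 32, 32))   # toy batch: 7 EUV channels, 32x32 pixels
deg, factors = degrade(imgs)
```

Applying a fresh draw of `factors` at every epoch is what lets the network learn the inverse mapping from dimmed images back to the degradation profile.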
Smart city is a collective term for technologies and concepts directed toward making cities more efficient, technologically advanced, greener, and more socially inclusive. These concepts include technical, economic, and social innovations. The term has been used by various actors in politics, business, administration, and urban planning since the 2000s to promote tech-based changes and innovations in urban areas. The idea of the smart city is used in conjunction with the utilization of digital technologies and at the same time represents a reaction to the economic, social, and political challenges that post-industrial societies have confronted since the start of the new millennium. The key focus is on dealing with challenges faced by urban society, such as environmental pollution, demographic change, population growth, healthcare, financial crises, and scarcity of resources. In a broader sense, the term also includes non-technical innovations that make urban life more sustainable. The idea of using IoT-based sensor networks for healthcare applications is promising, with the potential to minimize inefficiencies in existing infrastructure. A machine learning approach is key to the successful implementation of IoT-powered wireless sensor networks (WSNs) for this purpose, since a large amount of data must be handled intelligently. This paper discusses in detail how AI-powered IoT and WSNs are applied in the healthcare sector. This research will serve as a baseline study for understanding the role of the IoT in smart cities, in particular in the healthcare sector, and as a foundation for future research.