Remarkably, this new methodology performs best for thick, highly absorbing samples where traditional spectrophotometry is most challenging and unreliable, offering a promising alternative for quantifying the absorption properties of a wide range of liquid- and gelatinous-state materials not amenable to conventional methods.

In this paper, two novel deep learning methods are proposed for displacement estimation in ultrasound elastography. Although Convolutional Neural Networks (CNNs) have been very successful for displacement estimation in computer vision, they have rarely been used for ultrasound elastography. One of the main limitations is that Radio Frequency (RF) ultrasound data, which is crucial for precise displacement estimation, has vastly different frequency characteristics from images in computer vision. Top-ranked CNN methods in computer vision are mostly based on a multi-level strategy that estimates displacement at finer resolutions from coarser ones. This strategy does not work well for RF data because of its large high-frequency content. To mitigate this problem, we propose the Modified Pyramid, Warping and Cost volume Network (MPWC-Net) and RFMPWC-Net, both based on PWC-Net, which exploit the information in RF data through two different strategies. We obtained promising results using networks trained only on computer vision images. In the next step, we constructed a large ultrasound simulation database and proposed a new loss function to fine-tune the network and improve its performance. The proposed networks, well-known optical flow networks, and state-of-the-art elastography methods are evaluated on simulation, phantom, and in vivo data. Our two proposed networks substantially outperform current deep learning methods in terms of Contrast-to-Noise Ratio (CNR) and Strain Ratio (SR). The proposed methods also perform similarly to state-of-the-art elastography methods in terms of CNR and achieve better SR by substantially reducing the underestimation bias.
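CNR and SR are standard strain-imaging figures of merit. As a rough illustration (the exact window choices and the SR convention vary between papers, and this is not code from the paper), the following sketch computes both from a strain image given user-supplied target and background windows; all names are illustrative.

    import numpy as np

    def cnr_and_sr(strain, target_mask, background_mask):
        # strain:          2D array of axial strain estimates
        # target_mask:     boolean mask over the target (inclusion) window
        # background_mask: boolean mask over the background window
        t = strain[target_mask]
        b = strain[background_mask]
        # Elastographic contrast-to-noise ratio (a common definition).
        cnr = np.sqrt(2.0 * (b.mean() - t.mean()) ** 2 / (b.var() + t.var()))
        # Strain ratio; one common convention is target strain over background strain.
        sr = t.mean() / b.mean()
        return cnr, sr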
Brain networks provide essential insights into the diagnosis of many brain disorders. Integrative analysis of multiple types of connectivity, e.g., functional connectivity (FC) and structural connectivity (SC), can take advantage of their complementary information and may therefore help to identify patients. However, traditional brain network methods usually focus on either FC or SC to describe node interactions and consider only the interactions between paired network nodes. To tackle this problem, we propose an Attention-Diffusion-Bilinear Neural Network (ADB-NN) framework for brain network analysis, which is trained in an end-to-end manner. The proposed network seamlessly couples FC and SC to learn wider node interactions and generates a joint representation of FC and SC for diagnosis. Specifically, a brain network (graph) is first defined, in which each node corresponds to a brain region and is characterized by features of brain activity (i.e., FC) extracted from functional magnetic resonance imaging (fMRI), while the presence of edges is determined by physical neural fiber connections (i.e., SC) extracted from Diffusion Tensor Imaging (DTI). Based on this graph, we jointly train two Attention-Diffusion-Bilinear (ADB) modules. In each module, an attention model is used to automatically learn the strength of node interactions. This information further guides a diffusion process that generates new node representations by also taking the influence of other nodes into account. After that, the second-order statistics of these node representations are extracted by bilinear pooling to form connectivity-based features for disease prediction. The two ADB modules correspond to one-step and two-step diffusion, respectively. Experiments on a real epilepsy dataset demonstrate the effectiveness and advantages of the proposed method.

Recent advances in deep learning for medical image segmentation demonstrate expert-level accuracy. However, applying these models in clinically realistic environments can result in poor generalization and decreased accuracy, mainly due to the domain shift across hospitals, scanner vendors, imaging protocols, and patient populations. Common transfer learning and domain adaptation techniques have been proposed to address this bottleneck. However, these solutions require data (and annotations) from the target domain to retrain the model, and are therefore restrictive in practice for widespread model deployment. Ideally, we wish to have a trained (locked) model that works uniformly well across unseen domains without further training. In this paper, we propose a deep stacked transformation approach for domain generalization. Specifically, a series of n stacked transformations is applied to each image during network training. The underlying assumption is that the "expected" domain shift for a specific target setting can be simulated by extensive data augmentation on a single source domain, so that a model trained on the augmented "big" data (BigAug) generalizes to unseen domains. The results show that: (i) BigAug outperforms a competing method on unseen domains (where the latter degrades by 25%), (ii) BigAug is better than "shallower" stacked transforms (i.e., those with fewer transforms) on unseen domains and demonstrates a modest improvement over conventional augmentation on the source domain, (iii) after training with BigAug on one source domain, performance on an unseen domain is similar to that of a model trained from scratch on that domain with the same number of training samples. When training on large datasets (n = 465 volumes) with BigAug, (iv) performance on unseen domains reaches that of state-of-the-art fully supervised models trained and tested on their source domains. These findings establish a strong benchmark for the study of domain generalization in medical imaging and can be generalized to the design of highly robust deep segmentation models for clinical deployment.

Automated skin lesion segmentation and classification are two essential and closely related tasks in the computer-aided diagnosis of skin cancer. Although deep learning models are now prevalent, they are usually designed for only one of these tasks, ignoring the potential benefits of performing them jointly. In this paper, we propose the mutual bootstrapping deep convolutional neural network (MB-DCNN) model for simultaneous skin lesion segmentation and classification. This model consists of a coarse segmentation network (coarse-SN), a mask-guided classification network (mask-CN), and an enhanced segmentation network (enhanced-SN). On the one hand, the coarse-SN generates coarse lesion masks that provide a prior to bootstrap mask-CN, helping it locate and classify skin lesions accurately. On the other hand, the lesion localization maps produced by mask-CN are fed into enhanced-SN, aiming to transfer the localization information learned by mask-CN to enhanced-SN for accurate lesion segmentation. In this way, the segmentation and classification networks mutually transfer knowledge and facilitate each other in a bootstrapping manner.
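For the Attention-Diffusion-Bilinear framework summarized above, the sketch below gives one possible reading of a single ADB module: node features from FC are embedded, an attention model scores node interactions along SC edges, a one-step diffusion mixes neighbouring node representations, and bilinear pooling extracts their second-order statistics. This is an interpretation of the abstract, not the authors' implementation; the layer sizes, the attention parameterization, and all names are assumptions.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ADBModule(nn.Module):
        # One Attention-Diffusion-Bilinear block (illustrative sketch only).
        def __init__(self, in_dim, hid_dim):
            super().__init__()
            self.embed = nn.Linear(in_dim, hid_dim)   # node feature embedding
            self.att = nn.Linear(2 * hid_dim, 1)      # pairwise attention score

        def forward(self, x, adj):
            # x:   (n_nodes, in_dim) functional features per brain region (FC)
            # adj: (n_nodes, n_nodes) structural connectivity with self-loops (SC)
            h = torch.tanh(self.embed(x))                                # (n, d)
            n, d = h.shape
            pairs = torch.cat([h.unsqueeze(1).expand(n, n, d),
                               h.unsqueeze(0).expand(n, n, d)], dim=-1)  # (n, n, 2d)
            scores = self.att(pairs).squeeze(-1)                         # interaction strengths
            scores = scores.masked_fill(adj == 0, float('-inf'))         # keep only SC edges
            alpha = F.softmax(scores, dim=-1)                            # attention weights
            h_diff = alpha @ h                                           # one-step diffusion
            # Bilinear pooling: second-order statistics of the diffused representations.
            return (h_diff.t() @ h_diff).flatten() / n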
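The deep stacked transformation (BigAug) idea in the domain-generalization abstract above boils down to composing a stack of transforms on each training image drawn from a single source domain. A minimal sketch follows, assuming a few generic intensity and spatial perturbations; the actual BigAug transform set, magnitudes, and probabilities are not reproduced here.

    import random
    import numpy as np

    def stacked_augment(image, transforms, p=0.5):
        # Apply a stack of transforms to one image, each with probability p.
        for t in transforms:
            if random.random() < p:
                image = t(image)
        return image

    # Illustrative transform stack; names and magnitudes are assumptions, not BigAug's.
    example_transforms = [
        lambda img: img + np.random.normal(0.0, 0.05, img.shape),           # additive noise
        lambda img: img * np.random.uniform(0.9, 1.1),                      # intensity scaling
        lambda img: np.clip(img, 0.0, 1.0) ** np.random.uniform(0.8, 1.2),  # gamma-like shift
        lambda img: np.flip(img, axis=-1).copy(),                           # left-right flip
    ]

    # Usage (hypothetical input): augmented = stacked_augment(image_slice, example_transforms)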
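Finally, the mutual-bootstrapping data flow of the MB-DCNN abstract can be sketched as three chained networks: the coarse segmentation network conditions the classifier with a lesion mask, and the classifier's localization map in turn conditions the enhanced segmentation network. The component networks below are placeholders (any encoder-decoder segmenter and CNN classifier could stand in), and the concatenation-based guidance is an assumption rather than the paper's exact mechanism.

    import torch
    import torch.nn as nn

    class MBDCNNSketch(nn.Module):
        # Illustrative wiring of coarse-SN -> mask-CN -> enhanced-SN (not the authors' code).
        def __init__(self, coarse_sn, mask_cn, enhanced_sn):
            super().__init__()
            self.coarse_sn = coarse_sn      # coarse segmentation network
            self.mask_cn = mask_cn          # mask-guided classification network
            self.enhanced_sn = enhanced_sn  # enhanced segmentation network

        def forward(self, image):
            coarse_mask = self.coarse_sn(image)                  # coarse lesion mask
            # Classification guided by the coarse mask (here via an extra input channel);
            # mask_cn is assumed to return class logits and a lesion localization map.
            logits, loc_map = self.mask_cn(torch.cat([image, coarse_mask], dim=1))
            # Enhanced segmentation guided by the classifier's localization map.
            fine_mask = self.enhanced_sn(torch.cat([image, loc_map], dim=1))
            return coarse_mask, logits, fine_mask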