Linnetwalsh2487

The wall thickness has a significant influence on the performance parameters of the spherical transducer. The accuracy of the theory is validated by comparing the results with experiment and finite-element analysis.

The development of ultrasonic tweezers with multiple manipulation functions is challenging. In this work, multiple advanced manipulation functions are implemented in a single-probe-type ultrasonic tweezer built around a double-parabolic-reflector wave-guided high-power ultrasonic transducer (DPLUS). Owing to the strong high-frequency (1.49 MHz) linear vibration that the DPLUS excites at the manipulation probe's tip, the ultrasonic tweezer can capture micro-objects in a noncontact mode and transport them freely above the substrate. Captured micro-objects that adhere to the probe's tip in the low-frequency (154.4 kHz) working mode can be released by tuning the working frequency. Finite-element analyses indicate that these manipulations are driven by the acoustic radiation force.

Structured low-rank (SLR) algorithms, which exploit annihilation relations between the Fourier samples of a signal resulting from different properties of the signal, form a powerful image-reconstruction framework in several applications. This scheme relies on low-rank matrix completion to estimate the annihilation relations from the measurements, and the main challenge with this strategy is the high computational complexity of matrix completion. We introduce a deep learning (DL) approach that significantly reduces this computational complexity. Specifically, we use a convolutional neural network (CNN)-based filterbank that is trained to estimate the annihilation relations from imperfect (under-sampled and noisy) k-space measurements in magnetic resonance imaging (MRI). The main reason for the computational efficiency is that the parameters of the non-linear CNN are pre-learned from exemplar data, whereas SLR schemes learn the linear filterbank parameters from the dataset itself. Experimental comparisons show that the proposed scheme enables calibration-less parallel MRI and offers performance similar to that of SLR schemes while reducing the runtime by around three orders of magnitude. Unlike pre-calibrated and self-calibrated approaches, the proposed uncalibrated approach is insensitive to motion errors and affords higher acceleration. The proposed scheme also incorporates complementary image-domain priors, which significantly improves performance over SLR schemes.
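The following is a minimal sketch, not the authors' implementation, of what a CNN-based filterbank for estimating annihilation relations from multi-coil k-space could look like; the layer sizes, the channel layout (real and imaginary parts stacked), and the training setup hinted at in the comments are assumptions made for illustration.

```python
# A minimal sketch (not the paper's implementation) of a CNN-based
# "filterbank" that maps multi-coil k-space samples to annihilation
# residuals. Layer sizes, channel layout, and shapes are assumptions.
import torch
import torch.nn as nn

class AnnihilationCNN(nn.Module):
    """Maps stacked real/imaginary multi-coil k-space to residuals that
    should vanish when the learned annihilation relations hold."""

    def __init__(self, coils: int = 8, hidden: int = 64, taps: int = 5):
        super().__init__()
        in_ch = 2 * coils  # real and imaginary parts stacked as channels
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, hidden, taps, padding=taps // 2),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, hidden, taps, padding=taps // 2),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, in_ch, taps, padding=taps // 2),
        )

    def forward(self, kspace: torch.Tensor) -> torch.Tensor:
        # kspace: (batch, 2*coils, ky, kx). The output has the same shape
        # and is driven toward zero during pre-training on exemplar data;
        # the actual training objective and regularization are omitted here.
        return self.net(kspace)

# Forward pass on a placeholder under-sampled k-space tensor.
model = AnnihilationCNN(coils=8)
residual = model(torch.randn(1, 16, 256, 256))
print(residual.shape)  # torch.Size([1, 16, 256, 256])
```

Pre-training such a network once on exemplar data, rather than solving a matrix completion for every scan, is what the abstract credits for the roughly thousand-fold runtime reduction.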
Fully convolutional neural networks have made promising progress in joint liver and liver-tumor segmentation. Instead of following the debate over 2D versus 3D networks (for example, pursuing the balance between large-scale 2D pretraining and 3D context), in this paper we identify the wide variation in the ratio between intra- and inter-slice resolutions as a crucial obstacle to performance. To tackle the mismatch between intra- and inter-slice information, we propose a slice-aware 2.5D network that emphasizes extracting discriminative features using not only in-plane semantics but also out-of-plane coherence for each separate slice. Specifically, we present a slice-wise multi-input multi-output architecture to instantiate this design paradigm; it contains a Multi-Branch Decoder (MD) with a Slice-centric Attention Block (SAB) for learning slice-specific features and a Densely Connected Dice (DCD) loss that regularizes the inter-slice predictions to be coherent and continuous (a hedged sketch of one such slice-consistency Dice term is given at the end of this page). Based on these innovations, we achieve state-of-the-art results on the MICCAI 2017 Liver Tumor Segmentation (LiTS) dataset. We also test our model on the ISBI 2019 Segmentation of THoracic Organs at Risk (SegTHOR) dataset, and the results demonstrate the robustness and generalizability of the proposed method in other segmentation tasks.

Photoacoustic endoscopy (PAE), which combines the advantages of optical contrast and acoustic resolution, can visualize chemically specific optical information of tissues inside the human body. Its reconstruction methods have recently been researched extensively; however, most of them are limited to cylindrical scan trajectories rather than the helical scan that is more clinically practical. This article therefore proposes a methodology for image reconstruction and evaluation in helical-scan PAE. Unlike the traditional reconstruction method, the synthetic aperture focusing technique (SAFT), our method reconstructs images using wavefield extrapolation, which significantly improves computational efficiency and takes only 0.25 seconds for a 3-D reconstruction. In addition, the proposed evaluation methodology can estimate the resolutions and deviations of the reconstructed images in advance and can therefore be used to optimize the PAE scan parameters. Groups of simulations as well as ex-vivo experiments with different scan parameters demonstrate the performance of the proposed techniques. The quantitatively measured angular resolutions and deviations agree well with the theoretical results D√(r_s² + h²) / [1.25(r_s r_d + h²)] (rad) and −h l / (r_s r_d + h²) (rad), respectively, where D, r_d, r_s, h, and l denote the transducer diameter, the radius of the scan trajectory, the radius of the source position, the unit helical pitch, and the distance from targets to the helical scan plane. This theoretical result also holds for circular and cylindrical scans in the case h = 0 (a brief numeric sketch of these expressions is also given at the end of this page).

We present a simple, fully convolutional model for real-time (> 30 fps) instance segmentation that achieves competitive results on MS COCO, evaluated on a single Titan Xp, and is significantly faster than any previous state-of-the-art approach. We accomplish this by breaking instance segmentation into two parallel subtasks: (1) generating a set of prototype masks and (2) predicting per-instance mask coefficients. We then produce instance masks by linearly combining the prototypes with the mask coefficients. Because this process does not depend on repooling, it produces very high-quality masks and exhibits temporal stability for free. Furthermore, we analyze the emergent behavior of our prototypes and show that they learn to localize instances on their own in a translation-variant manner, despite being fully convolutional. We also propose Fast NMS, a drop-in replacement for standard NMS that is 12 ms faster with only a marginal performance penalty. Finally, by incorporating deformable convolutions into the backbone network, optimizing the prediction head with better anchor scales and aspect ratios, and adding a novel fast mask re-scoring branch, our YOLACT++ model can achieve 34.
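As an illustration of the prototype-and-coefficient mask assembly described in the instance-segmentation abstract above, here is a minimal sketch; the tensor shapes, the number of prototypes, and the sigmoid/tanh activations are assumptions rather than the published configuration.

```python
# Illustrative sketch of YOLACT-style mask assembly: instance masks are a
# linear combination of shared prototype masks and per-instance coefficients.
# Shapes and activation choices here are assumptions, not the reference code.
import torch

def assemble_masks(prototypes: torch.Tensor, coefficients: torch.Tensor) -> torch.Tensor:
    """prototypes:   (H, W, K)  K prototype masks shared across instances
    coefficients: (N, K)      one K-vector of mask coefficients per instance
    returns:      (N, H, W)   soft instance masks in [0, 1]
    """
    H, W, K = prototypes.shape
    # Linear combination: each instance mask is a weighted sum of prototypes.
    masks = prototypes.reshape(H * W, K) @ coefficients.t()   # (H*W, N)
    masks = torch.sigmoid(masks).t().reshape(-1, H, W)        # (N, H, W)
    return masks

# Usage with random placeholder tensors (K = 32 prototypes, N = 5 detections).
protos = torch.rand(138, 138, 32)
coeffs = torch.tanh(torch.randn(5, 32))  # coefficients assumed to be bounded
print(assemble_masks(protos, coeffs).shape)  # torch.Size([5, 138, 138])
```

Because the combination is a single matrix product over full-resolution prototypes, no per-instance repooling is needed, which is the property the abstract credits for mask quality and temporal stability.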
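Returning to the helical-scan PAE abstract above, the snippet below simply evaluates the angular-resolution and deviation expressions for made-up scan parameters; every numeric value is a hypothetical placeholder, not data from the paper.

```python
# Evaluate the helical-scan PAE angular-resolution and deviation expressions
# from the abstract above for hypothetical scan parameters (all values are
# placeholders chosen only to show how the expressions are used).
import math

def angular_resolution(D, r_s, r_d, h):
    """D * sqrt(r_s^2 + h^2) / [1.25 * (r_s * r_d + h^2)], in radians."""
    return D * math.sqrt(r_s**2 + h**2) / (1.25 * (r_s * r_d + h**2))

def angular_deviation(l, r_s, r_d, h):
    """-h * l / (r_s * r_d + h^2), in radians."""
    return -h * l / (r_s * r_d + h**2)

# Hypothetical parameters (metres): 1 mm transducer, 3 mm scan radius,
# 6 mm source radius, 0.5 mm helical pitch, target 2 mm from the scan plane.
D, r_d, r_s, h, l = 1e-3, 3e-3, 6e-3, 0.5e-3, 2e-3
print(angular_resolution(D, r_s, r_d, h))  # resolution in rad
print(angular_deviation(l, r_s, r_d, h))   # deviation in rad
# Setting h = 0 recovers the circular/cylindrical-scan special case.
print(angular_resolution(D, r_s, r_d, 0.0))
```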
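Finally, for the slice-aware segmentation abstract above: the Densely Connected Dice (DCD) loss is described there only at a high level, so the following is a hedged sketch of one possible reading, a standard soft-Dice term per slice plus a Dice-based agreement term between neighbouring slice predictions. It is not the authors' loss, and the weighting and slice-pairing scheme are assumptions.

```python
# A hedged sketch of the kind of inter-slice regularization the DCD loss is
# described as providing: per-slice soft Dice against labels plus a Dice
# agreement term between adjacent slice predictions. One possible reading
# of the abstract, not the authors' implementation.
import torch

def soft_dice(p: torch.Tensor, q: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Soft Dice similarity between two probability maps of the same shape."""
    inter = (p * q).sum()
    return (2 * inter + eps) / (p.sum() + q.sum() + eps)

def slice_aware_dice_loss(preds: torch.Tensor, labels: torch.Tensor,
                          coherence_weight: float = 0.1) -> torch.Tensor:
    """preds, labels: (S, H, W) per-slice foreground probabilities / masks."""
    num_slices = preds.shape[0]
    # Per-slice segmentation term.
    seg = torch.stack([1 - soft_dice(preds[s], labels[s])
                       for s in range(num_slices)]).mean()
    # Coherence term: adjacent slice predictions should overlap smoothly.
    coh = torch.stack([1 - soft_dice(preds[s], preds[s + 1])
                       for s in range(num_slices - 1)]).mean()
    return seg + coherence_weight * coh

# Usage with placeholder tensors (5 slices of 64x64 probabilities).
preds = torch.rand(5, 64, 64)
labels = (torch.rand(5, 64, 64) > 0.5).float()
print(slice_aware_dice_loss(preds, labels))
```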