Abstract: Transformer-based object detection models usually adopt an encoder-decoder architecture that mainly combines self-attention (SA) and multilayer perceptron (MLP) blocks. Although this architecture ...
Abstract: In this study, we aim to perform a high-precision domain transformation that maps data captured in any environment, including unfavorable ones, to data as if it were captured in a typical environment. This ...