{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,12,4]],"date-time":"2025-12-04T09:57:56Z","timestamp":1764842276790,"version":"build-2065373602"},"publisher-location":"California","reference-count":0,"publisher":"International Joint Conferences on Artificial Intelligence Organization","content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":[],"published-print":{"date-parts":[[2019,8]]},"abstract":"<jats:p>The key idea of current deep learning methods for dense prediction is to apply a model on a regular patch centered on each pixel to make pixel-wise predictions. These methods are limited in the sense that the patches are determined by network architecture instead of learned from data. In this work, we propose the dense transformer networks, which can learn the shapes and sizes of patches from data. The dense transformer networks employ an encoder-decoder architecture, and a pair of dense transformer modules are inserted into each of the encoder and decoder paths. The novelty of this work is that we provide technical solutions for learning the shapes and sizes of patches from data and efficiently restoring the spatial correspondence required for dense prediction. The proposed dense transformer modules are differentiable, thus the entire network can be trained. We apply the proposed networks on biological image segmentation tasks and show superior performance is achieved in comparison to baseline methods.<\/jats:p>","DOI":"10.24963\/ijcai.2019\/401","type":"proceedings-article","created":{"date-parts":[[2019,7,28]],"date-time":"2019-07-28T07:46:05Z","timestamp":1564299965000},"page":"2894-2900","source":"Crossref","is-referenced-by-count":6,"title":["Dense Transformer Networks for Brain Electron Microscopy Image Segmentation"],"prefix":"10.24963","author":[{"given":"Jun","family":"Li","sequence":"first","affiliation":[{"name":"Washington State University"}]},{"given":"Yongjun","family":"Chen","sequence":"additional","affiliation":[{"name":"Washington State University"}]},{"given":"Lei","family":"Cai","sequence":"additional","affiliation":[{"name":"Washington State University"}]},{"given":"Ian","family":"Davidson","sequence":"additional","affiliation":[{"name":"University of California, Davis"}]},{"given":"Shuiwang","family":"Ji","sequence":"additional","affiliation":[{"name":"Texas A&M University"}]}],"member":"10584","event":{"number":"28","sponsor":["International Joint Conferences on Artificial Intelligence Organization (IJCAI)"],"acronym":"IJCAI-2019","name":"Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}","start":{"date-parts":[[2019,8,10]]},"theme":"Artificial Intelligence","location":"Macao, China","end":{"date-parts":[[2019,8,16]]}},"container-title":["Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence"],"original-title":[],"deposited":{"date-parts":[[2019,7,28]],"date-time":"2019-07-28T07:49:03Z","timestamp":1564300143000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.ijcai.org\/proceedings\/2019\/401"}},"subtitle":[],"proceedings-subject":"Artificial Intelligence Research Articles","short-title":[],"issued":{"date-parts":[[2019,8]]},"references-count":0,"URL":"https:\/\/doi.org\/10.24963\/ijcai.2019\/401","relation":{},"subject":[],"published":{"date-parts":[[2019,8]]}}}