{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,6,18]],"date-time":"2025-06-18T04:28:07Z","timestamp":1750220887264,"version":"3.41.0"},"reference-count":0,"publisher":"Association for Computing Machinery (ACM)","issue":"5","license":[{"start":{"date-parts":[[2019,10,1]],"date-time":"2019-10-01T00:00:00Z","timestamp":1569888000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["Queue"],"published-print":{"date-parts":[[2019,10]]},"abstract":"<jats:p>Procella is the latest in a long line of data processing systems at Google. What\u2019s unique about it is that it\u2019s a single store handling reporting, embedded statistics, time series, and ad-hoc analysis workloads under one roof. It\u2019s SQL on top, cloud-native underneath, and it\u2019s serving billions of queries per day over tens of petabytes of data. There\u2019s one big data use case that Procella isn\u2019t handling today though, and that\u2019s machine learning. But in \u2018Declarative recursive computation on an RDBMS... or, why you should use a database for distributed machine learning,\u2019 Jankov et al. make the case for the database being the ideal place to handle the most demanding of distributed machine learning workloads.<\/jats:p>","DOI":"10.1145\/3371595.3371598","type":"journal-article","created":{"date-parts":[[2019,11,7]],"date-time":"2019-11-07T13:08:23Z","timestamp":1573132103000},"page":"39-41","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":0,"title":["Back under a SQL Umbrella"],"prefix":"10.1145","volume":"17","author":[{"given":"Adrian","family":"Colyer","sequence":"first","affiliation":[{"name":"Accel"}]}],"member":"320","published-online":{"date-parts":[[2019,10]]},"container-title":["Queue"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3371595.3371598","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3371595.3371598","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T23:44:23Z","timestamp":1750203863000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3371595.3371598"}},"subtitle":["Unifying serving and analytical data; using a database for distributed machine learning"],"short-title":[],"issued":{"date-parts":[[2019,10]]},"references-count":0,"journal-issue":{"issue":"5","published-print":{"date-parts":[[2019,10]]}},"alternative-id":["10.1145\/3371595.3371598"],"URL":"https:\/\/doi.org\/10.1145\/3371595.3371598","relation":{},"ISSN":["1542-7730","1542-7749"],"issn-type":[{"type":"print","value":"1542-7730"},{"type":"electronic","value":"1542-7749"}],"subject":[],"published":{"date-parts":[[2019,10]]},"assertion":[{"value":"2019-10-01","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}