Customer-obsessed science


Research areas

September 2, 2025: Audible's ML algorithms connect users directly to relevant titles, reducing the number of purchase steps for millions of daily users.
Featured news

VLDB 2025: Cloud service providers usually leverage standard benchmarks such as TPC-H and TPC-DS to evaluate and optimize the performance of cloud data analytics systems. However, these benchmarks have fixed query patterns and cannot effectively reproduce the statistics of production cloud workloads. For example, they cannot simulate a real workload with similar performance metrics such as CPU time and…
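
A minimal sketch of the general idea the abstract gestures at: resampling a benchmark's query templates so that an aggregate statistic (mean CPU time here) approximates a production target, rather than using a fixed query mix. The template names, timings, and the crude random search below are hypothetical stand-ins, not the paper's method.

```python
import random

# Hypothetical per-template CPU costs measured on the target system.
TEMPLATE_CPU_SECONDS = {"q1_scan": 2.0, "q5_join": 14.0, "q18_agg": 45.0}

def sample_workload(target_mean_cpu, n_queries=1000, iters=5000):
    """Find template mixture weights whose expected CPU time is close to
    `target_mean_cpu`, then draw a workload from that mixture."""
    names = list(TEMPLATE_CPU_SECONDS)
    best_w, best_err = None, float("inf")
    for _ in range(iters):  # crude random search over mixture weights
        w = [random.random() for _ in names]
        total = sum(w)
        w = [x / total for x in w]
        mean = sum(wi * TEMPLATE_CPU_SECONDS[n] for wi, n in zip(w, names))
        if abs(mean - target_mean_cpu) < best_err:
            best_w, best_err = w, abs(mean - target_mean_cpu)
    return random.choices(names, weights=best_w, k=n_queries)

workload = sample_workload(target_mean_cpu=20.0)
```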

ACM CCS 2025: Motivated by applications to efficient secure computation, we consider the following problem of encrypted matrix-vector product (EMVP). Let F be a finite field. In an offline phase, a client uploads an encryption of a matrix M ∈ F^(m×ℓ) to a server, keeping only a short secret key. The server stores the encrypted matrix M̂. In the online phase, the client may repeatedly send encryptions q̂_i of query vectors…
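
To make the offline/online structure concrete, here is a toy sketch of the EMVP data flow over F_p using a one-time-pad mask on the matrix. It is not the paper's scheme: for simplicity the query vectors are sent in the clear (the actual protocol encrypts them as q̂_i), and the client-side decoding here costs as much as the product itself.

```python
import random

P = 2**61 - 1  # prime modulus; all arithmetic is over the field F_p

def offline_upload(M):
    """Client: mask M entrywise with a one-time pad R over F_p.
    The server stores M_hat; the client keeps R as its secret key."""
    m, l = len(M), len(M[0])
    R = [[random.randrange(P) for _ in range(l)] for _ in range(m)]
    M_hat = [[(M[i][j] + R[i][j]) % P for j in range(l)] for i in range(m)]
    return M_hat, R

def server_answer(M_hat, q):
    """Server: matrix-vector product on the masked matrix only."""
    return [sum(row[j] * q[j] for j in range(len(q))) % P for row in M_hat]

def client_decode(y_hat, R, q):
    """Client: subtract R·q, leaving y = (M + R)·q − R·q = M·q (mod p)."""
    Rq = [sum(row[j] * q[j] for j in range(len(q))) % P for row in R]
    return [(y - r) % P for y, r in zip(y_hat, Rq)]

# Usage: the decoded result equals M·q mod p.
M, q = [[3, 1], [4, 1], [5, 9]], [2, 6]
M_hat, R = offline_upload(M)
assert client_decode(server_answer(M_hat, q), R, q) == [12, 14, 64]
```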

2025: Quantifying uncertainty in black-box LLMs is vital for reliable responses and scalable oversight. Existing methods, which gauge a model's uncertainty by evaluating the self-consistency of its responses to the target query, can be misleading: an LLM may confidently provide an incorrect answer to a target query, yet give a confident and accurate answer to that same query when answering a knowledge-preserving…
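
For reference, the self-consistency baseline the abstract critiques can be sketched as follows: sample several stochastic answers and treat the empirical answer entropy as uncertainty. `sample_answer` is a hypothetical wrapper around one temperature-sampled call to the black-box model; the abstract's point is that a low (confident) score here can coexist with a wrong answer.

```python
import math
from collections import Counter

def self_consistency_uncertainty(sample_answer, query, n=20):
    """Empirical answer entropy over n stochastic samples; 0 means fully
    self-consistent. `sample_answer(query)` should return a normalized
    answer string from one temperature > 0 model call."""
    counts = Counter(sample_answer(query) for _ in range(n))
    return -sum((c / n) * math.log(c / n) for c in counts.values())
```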

EMNLP 2025 Findings: Large language models (LLMs) often fail to scale their performance on long-context tasks in line with the context lengths they support. This gap is commonly attributed to retrieval failures, i.e., the models' inability to identify relevant information in the long inputs. Accordingly, recent efforts often focus on evaluating and improving LLMs' retrieval performance: if retrieval is perfect, a model…
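
A common way to evaluate long-context retrieval is a needle-in-a-haystack probe: plant a fact at a chosen depth in filler text and check whether the model surfaces it. This is a generic sketch of that setup, not necessarily the paper's benchmark; `llm(prompt)` is a hypothetical single-call wrapper around the model under test.

```python
def long_context_retrieval_probe(llm, needle, question, answer, filler, depth=0.5):
    """Insert `needle` at a relative `depth` into `filler` and check whether
    the model retrieves it to answer `question` correctly."""
    cut = int(len(filler) * depth)
    context = filler[:cut] + "\n" + needle + "\n" + filler[cut:]
    prompt = f"{context}\n\nQuestion: {question}\nAnswer:"
    return answer.lower() in llm(prompt).lower()
```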

Despite significant advancements in time series forecasting, accurate modeling of time series with strong heterogeneity in magnitude and/or sparsity patterns remains challenging for state-of-the-art deep learning architectures. We identify several factors that lead existing models to systematically underperform on low-magnitude and sparse time series, including loss functions with implicit biases toward…
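
A small numerical illustration of one such implicit bias, using made-up series: when squared errors are summed across series, the contribution of a high-magnitude series swamps that of a low-magnitude one, so a model trained on the aggregate loss has little incentive to fit the latter. The mitigation noted in the comment is a common one, not necessarily the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
large = 1000 + 50 * rng.standard_normal(200)  # high-magnitude series
small = 1 + 0.5 * rng.standard_normal(200)    # low-magnitude series

def mse(y, pred):
    return float(np.mean((y - pred) ** 2))

# Equally "reasonable" constant forecasts for each series:
err_large = mse(large, np.full(200, 1000.0))  # ~2500
err_small = mse(small, np.full(200, 1.0))     # ~0.25
print(err_large / err_small)  # ~1e4: the summed loss is dominated by `large`.
# A common mitigation (not necessarily the paper's) is per-series scale
# normalization of the error, e.g. dividing by each series' mean magnitude.
```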

Conferences

Academia
Whether you're a faculty member or a student, there are a number of ways you can engage with Amazon.