Query execution failed with error Ran out of slots in the query output buffer
Running a CPU-only instance (8 CPUs, 52 GB RAM) I get the following error:
"Query execution failed with error Ran out of slots in the query output buffer"
The query is
select m.col1, round(sum(m.col2)) as val
from my_table m
where m.col2 is not null
  and m.col2 > 0
  and m.col3 in (1)
  and m.col4 in (17)
  and 10 + m.col5 * 10000 + m.col6 in (1234)
  and m.col6 = 231
group by m.col1
order by round(sum(m.col2)) desc
limit 1000
"my_table" has 792948 rows.
At the time of the error, \memory_summary showed less than 100 MB in use.
The error no longer occurs after restarting omnisci_server.
OmniSQL Version: 5.3.2-20200803-a970373210
Is there a configuration parameter that can help with this?
Is this a recurring issue? Does it happen every time you run this query, or only randomly? Have you tried the \cpu_memory command to flush the CPU cache instead of restarting the server?
Anyway, this issue generally happens when the engine misestimates the number of rows a query will return, so I can only suggest turning off the parameter use-estimator-result-cache (--use-estimator-result-cache=false) as a workaround. It would also be helpful to turn on the verbose parameter to give us some information to figure out the problem; I haven't been able to reproduce it so far, so a detailed log would help.
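As a sketch, both suggestions could be applied through the server's config file, assuming the standard key = value format that omnisci_server reads; the file name and location vary by install, so treat this fragment as something to verify against your version's documentation:

```
# Hypothetical omnisci.conf fragment -- verify keys for your version
# skip caching of estimator query results (the suggested workaround)
use-estimator-result-cache = false
# enable verbose logging to capture details for debugging
verbose = true
```

The same settings can alternatively be passed on the command line when starting omnisci_server, e.g. --use-estimator-result-cache=false --verbose=true.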
Best Regards, Candido
It happened on a specific query several times before restarting the service and hasn't happened since. I'll try \cpu_memory if/when it does and get detailed logs.
Does disabling use-estimator-result-cache affect performance?
I wasn't aware of the \cpu_memory command; it isn't documented under \h. Is there a list somewhere of all the omnisql slash commands?
Thanks for your help! Ray
Nope, I mixed up the command... I meant \clear_cpu. It fixes memory-fragmentation problems, which is unlikely to be the cause here; the command clears the CPU memory cache, so everything has to be re-read from disk afterwards.
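Putting the two commands mentioned in this thread together, a quick way to try this in an omnisql session might look like the following (output omitted, since I haven't reproduced the issue here):

```
omnisql> \memory_summary
omnisql> \clear_cpu
omnisql> \memory_summary
```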
To reply to your second question: yes, it could affect performance, because it suppresses the caching of estimator queries, but it's worth using if the issue is a recurring one.
The other thing you can try is to disable the estimator's cache with the parameter use-estimator-result-cache, though this will affect performance; even with this parameter you can still hit the issue, but our engineers are aware of the problem and have identified some scenarios that can lead to this error, so fixes are likely on the way, although I'm not sure they will cover your scenario.
Can you describe what you are doing in the system? (Modifying data? Just querying a read-only database?)
Hi, okay, thank you. This OmniSci instance is running two databases. One is mainly read-only geospatial information, and the other is regularly fed new data that is then joined.
The second database is not especially large (about 30 million rows per table at most). However, the query that triggers this issue unfortunately groups by a large number of columns. If I leave those out, it does seem to work.
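As a side note, here is a toy illustration of why each extra GROUP BY column can inflate the number of output slots a query engine has to budget for. It uses Python's sqlite3 purely as a stand-in engine; nothing here reflects OmniSci internals.

```python
import itertools
import sqlite3

# Build a toy table whose three grouping columns each take 10 distinct values.
con = sqlite3.connect(":memory:")
con.execute("create table t (a, b, c, val)")
con.executemany(
    "insert into t values (?, ?, ?, ?)",
    [(a, b, c, 1.0) for a, b, c in itertools.product(range(10), range(10), range(10))],
)

# Grouping by one column yields 10 result rows...
one_col = con.execute(
    "select count(*) from (select a from t group by a)"
).fetchone()[0]

# ...while grouping by all three yields 10 * 10 * 10 = 1000 result rows,
# so the engine's output-buffer estimate must grow accordingly.
three_cols = con.execute(
    "select count(*) from (select a, b, c from t group by a, b, c)"
).fetchone()[0]
```

This multiplicative growth in potential group keys is consistent with the symptom above: dropping the grouping columns shrinks the required output buffer, and the query succeeds.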
If it helps, I would be happy to send you the exact queries.
Don't worry; as I wrote to you in private, the upcoming version includes some fixes for the issue. I am not sure those fixes cover every possible scenario that raises the error, but the test case was quite similar to the one you described in this thread.