Questions on computation precision of DBSCAN
April 27, 2022
1:24 p.m.
Dear community,

When calling `sklearn.cluster.DBSCAN`, I found that it can incur a huge memory cost. I tried to reduce this by casting my input data to np.float16 and by passing "precomputed" as the metric, but it appears that float64 is still used internally when `fit_predict` is called (I get errors indicating that a float64 memory allocation failed). Any suggestions for reducing the computation cost are highly appreciated.

Thanks.

All the best,
--
Mingzhe HU
Columbia University in the City of New York
M.S. in Electrical Engineering
mingzhe.hu@columbia.edu <mh4116@columbia.edu>
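A minimal sketch of one commonly suggested workaround: instead of a dense precomputed distance matrix (which is n-by-n float64), precompute a *sparse* radius-neighborhood graph that only stores distances within `eps`, and feed that to DBSCAN with `metric="precomputed"`. The data shape, `eps`, and `min_samples` values below are illustrative assumptions, not taken from the original post.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.neighbors import radius_neighbors_graph

# Illustrative random data; float32 halves memory vs. the float64 default.
rng = np.random.default_rng(0)
X = rng.random((1000, 8)).astype(np.float32)

eps = 0.5  # assumed value for the sketch

# Sparse CSR matrix holding only pairwise distances <= eps,
# avoiding a dense 1000 x 1000 float64 distance matrix.
D = radius_neighbors_graph(X, radius=eps, mode="distance")

# DBSCAN accepts a sparse precomputed matrix; absent entries are
# treated as "not neighbors".
labels = DBSCAN(eps=eps, min_samples=5, metric="precomputed").fit_predict(D)
print(labels.shape)  # one cluster label per sample
```

The memory saving comes from sparsity, not from the dtype: the neighborhood graph only grows with the number of point pairs within `eps`, so choosing a reasonable `eps` keeps it small even for large datasets.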