Testing is the best way to answer this question for your specific computer architecture, compiler, and implementation. Beyond that, there are generalizations.

First off, priority queues are not necessarily O(n log n).

If you have integer data, there are priority queues which work in O(1) time. Beucher and Meyer's 1992 publication "The morphological approach to segmentation: the watershed transformation" describes hierarchical queues, which work quite quickly for integer values with limited range. Brown's 1988 publication "Calendar queues: a fast O(1) priority queue implementation for the simulation event set problem" offers another solution which deals well with larger ranges of integers; two decades of work following Brown's publication has produced some nice results for doing integer priority queues fast. But the machinery of these queues can become complicated: bucket sorts and radix sorts may still provide O(1) operation. In some cases, you may even be able to quantize floating-point data to take advantage of an O(1) priority queue.

Even in the general case of floating-point data, that O(n log n) is a little misleading. Edelkamp's book "Heuristic Search: Theory and Applications" has a handy table showing the time complexity for various priority queue algorithms (remember, priority queues are equivalent to sorting and heap management).

As you can see, many priority queues have O(log n) costs not just for insertion, but also for extraction, and even queue management! While the coefficient is generally dropped when measuring the time complexity of an algorithm, these costs are still worth knowing.

But all these queues still have time complexities which are comparable. A paper by Luengo Hendriks entitled "Revisiting priority queues for image analysis" addresses this question.

In Hendriks' hold test, a priority queue was seeded with N random numbers in a fixed range. The top-most element of the queue was then dequeued, incremented by a random value, and re-queued. The overhead of generating the random numbers was subtracted from the measured times. Ladder queues and hierarchical heaps performed quite well in this test. The per-element times to initialize and empty the queues were also measured; these tests are very relevant to your question.

As you can see, the different queues often had very different responses to enqueueing and dequeueing. These figures imply that while there may be priority queue algorithms which are superior for continuous operation, there is no best choice of algorithm for simply filling and then emptying a priority queue (the operation you're doing).

> What's faster: inserting into a priority queue, or sorting retrospectively?

As shown above, priority queues can be made efficient, but there are still costs for insertion, removal, and management. Inserting into a vector is fast: it's O(1) in amortized time, there are no management costs, and the vector is O(n) to read. Sorting the vector will cost you O(n log n), assuming that you have floating-point data, but this time complexity isn't hiding things the way the priority queues were. (You have to be a little careful, though. Quicksort runs very well on some data, but it has a worst-case time complexity of O(n^2). For some implementations, this is a serious security risk.)
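To make the integer case concrete, here is a minimal sketch of a bucket-based queue in the spirit of the hierarchical queues described above. The class name and interface are my own invention for illustration; it assumes priorities are integers in a small fixed range `[0, max_prio]`, and its pop is O(1) amortized when extraction is monotone (priorities never fall below the last value popped), which holds for watershed-style processing:

```cpp
#include <cassert>
#include <cstddef>
#include <queue>
#include <vector>

// Sketch of a hierarchical (bucket) priority queue for integer priorities
// in a fixed range [0, max_prio]. push() is O(1); pop() is O(1) amortized
// for monotone extraction, since the cursor only sweeps forward.
class BucketQueue {
public:
    explicit BucketQueue(int max_prio)
        : buckets_(static_cast<std::size_t>(max_prio) + 1),
          current_(0), size_(0) {}

    void push(int prio, int value) {
        buckets_[static_cast<std::size_t>(prio)].push(value);
        if (prio < current_) current_ = prio;  // tolerate a non-monotone insert
        ++size_;
    }

    // Removes and returns a value with the smallest priority present
    // (FIFO order within one priority level).
    int pop() {
        while (buckets_[static_cast<std::size_t>(current_)].empty())
            ++current_;  // advance the cursor to the next occupied bucket
        std::queue<int>& b = buckets_[static_cast<std::size_t>(current_)];
        int v = b.front();
        b.pop();
        --size_;
        return v;
    }

    bool empty() const { return size_ == 0; }

private:
    std::vector<std::queue<int>> buckets_;  // one FIFO per priority level
    int current_;                           // smallest possibly-occupied level
    std::size_t size_;
};
```

A queue like this trades memory proportional to the priority range for constant-time operations, which is exactly the "limited range" caveat mentioned above.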
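The fill-then-empty comparison the question asks about can be sketched as two functions (a toy illustration with hypothetical names, not a benchmark; which one wins in practice depends on your architecture, compiler, and standard-library implementation):

```cpp
#include <algorithm>
#include <functional>
#include <queue>
#include <vector>

// Strategy A: push every element into a binary-heap priority queue
// (O(log n) per insertion), then pop them all (O(log n) per removal).
std::vector<double> via_priority_queue(const std::vector<double>& data) {
    std::priority_queue<double, std::vector<double>, std::greater<>> pq;  // min-heap
    for (double x : data) pq.push(x);
    std::vector<double> out;
    out.reserve(data.size());
    while (!pq.empty()) {
        out.push_back(pq.top());
        pq.pop();
    }
    return out;
}

// Strategy B: append to a vector (O(1) amortized per element),
// then sort once at the end (a single O(n log n) pass).
std::vector<double> via_sort(const std::vector<double>& data) {
    std::vector<double> out = data;
    std::sort(out.begin(), out.end());
    return out;
}
```

Both produce the same ascending sequence, so timing them on your own data is straightforward; note that `std::sort` is typically introsort, which avoids quicksort's O(n^2) worst case mentioned above.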