• Does cheating produce faster searches?

    From Luc@luc@sep.invalid to comp.lang.tcl on Wed Sep 25 17:01:49 2024
    From Newsgroup: comp.lang.tcl

    Suppose I have a large list. Very large list. Then I want to search
    for an item in that list:

    % lsearch $list "string"

    Now, suppose I have many lists instead. One list contains all the items
    that begin with the letter a, another list contains all the items that
    begin with the letter b, another list contains all the items that begin
    with the letter c, and so on. Then I see what the first character in
    "string" is and only search for it in the one corresponding list.
    Would that be faster? I somehow suspect the answer is 'no.'

    Bonus question: what about sqlite operations? Would they be faster if
    I had one separate table for each initial letter/character?

    TIA
    --
    Luc


    --- Synchronet 3.20a-Linux NewsLink 1.114
  • From Rich@rich@example.invalid to comp.lang.tcl on Wed Sep 25 20:36:23 2024
    From Newsgroup: comp.lang.tcl

    Luc <luc@sep.invalid> wrote:
    > Suppose I have a large list. Very large list. Then I want to search
    > for an item in that list:
    >
    > % lsearch $list "string"
    >
    > Now, suppose I have many lists instead. One list contains all the items
    > that begin with the letter a, another list contains all the items that
    > begin with the letter b, another list contains all the items that begin
    > with the letter c, and so on. Then I see what the first character in
    > "string" is and only search for it in the one corresponding list.

    What you have described is, at a very high level, similar to how
    B-Tree indexes work.

    https://en.wikipedia.org/wiki/B-tree

    And they are the common index type used in databases for faster data
    retrieval.

    > Would that be faster? I somehow suspect the answer is 'no.'

    The answer is actually "it depends". You'd have to define "very
    large". For some value of "very large", the extra time spent picking
    which sublist to search, plus the shorter search itself, will most
    likely beat a single search across one list containing all the items.

    But, as is always the case, the algorithm matters.

    Basic 'lsearch' does, by default, a linear search. It starts at
    index 0 and looks at each item sequentially until it finds a match
    or hits the end of the list. For this algorithm, your "separate
    based on first character" method will be faster at a not-so-big
    value of "very large".

    But, lsearch also has the "-sorted" option. For a "very large" list
    that you can maintain in sorted order, using "-sorted" causes lsearch
    to perform a binary search upon the list.

    https://en.wikipedia.org/wiki/Binary_search

    With this alternate algorithm, "very large" has to be much larger
    than it does for the linear search without "-sorted" before the
    "separate lists" method comes out ahead.

    However, "-sorted" leaves you needing the "very large list" to
    actually be sorted, and the time needed to sort a "very large" list
    could be substantial. So in this case you'd want either to sort once
    and search many times, or, if the list is being modified, to modify
    it in a way that keeps it sorted rather than re-sorting it over and
    over again.
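One way to keep a list sorted as it grows is to binary-search for the insertion point with lsearch's -bisect option (Tcl 8.6+) and splice with linsert. A sketch with made-up data:

```tcl
# Insert $item into the sorted list in $listVar, keeping it sorted,
# instead of re-sorting the whole list after every modification.
proc sortedInsert {listVar item} {
    upvar 1 $listVar l
    # -sorted -bisect binary-searches for the last element <= $item
    # (returns -1 if $item sorts before everything in the list).
    set idx [lsearch -sorted -bisect $l $item]
    set l [linsert $l [expr {$idx + 1}] $item]
}

set l [lsort {apple cherry pear}]
sortedInsert l banana
puts $l   ;# → apple banana cherry pear
```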

    > Bonus question: what about sqlite operations? Would they be faster if
    > I had one separate table for each initial letter/character?

    Again, it depends.

    If you have no indexes on your tables, then likely yes.

    If you have indexes on the tables that can be utilized by the query
    you are running, then the table will need to be much, much larger
    before the "split up" version might become faster.
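A sketch of that with the Tcl sqlite3 interface (the table and column names are made up): one index on the searched column gives you the B-tree lookup without any one-table-per-letter bookkeeping:

```tcl
package require sqlite3
sqlite3 db :memory:

db eval {CREATE TABLE words (word TEXT)}
foreach w {apple banana cherry} {
    db eval {INSERT INTO words VALUES ($w)}
}

# One index replaces the manual "one table per letter" split:
# SQLite walks a B-tree instead of scanning the whole table.
db eval {CREATE INDEX idx_word ON words(word)}

# Equality lookups on word can now use the index.
set hit [db eval {SELECT word FROM words WHERE word = 'banana'}]
puts $hit   ;# → banana
db close
```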
  • From Shaun Deacon@sdeacon@us.socionext.com to comp.lang.tcl on Thu Sep 26 18:46:36 2024
    From Newsgroup: comp.lang.tcl

    Luc wrote:
    > Suppose I have a large list. Very large list. Then I want to search
    > for an item in that list:
    >
    > % lsearch $list "string"
    >
    > Now, suppose I have many lists instead. One list contains all the items
    > that begin with the letter a, another list contains all the items that
    > begin with the letter b, another list contains all the items that begin
    > with the letter c, and so on. Then I see what the first character in
    > "string" is and only search for it in the one corresponding list.
    > Would that be faster? I somehow suspect the answer is 'no.'
    >
    > Bonus question: what about sqlite operations? Would they be faster if
    > I had one separate table for each initial letter/character?
    >
    > TIA


    As you've probably discovered, lsearch can be slow for huge lists,
    and if you're doing basic searches on a big list a lot, it can be
    noticeable.

    Depending very much on what you want to do with the result of your
    search, and if speed is your primary concern, there may be another
    way to approach this...

    If you're just checking for whether a word exists in some word list,
    have you considered creating array variables ?

    foreach word $bigWordList {
        set words($word) 1
    }
    ...
    set string "foo"
    if { [info exists words($string)] } {
        puts "$string is in my list"
    } else {
        puts "$string is not in my list"
    }

    The above test would be quick and you just build the initial 'words'
    array once (and then add new words by setting new variables). Of course,
    "set words($word) xxx" could be set to something other than 1...
  • From Luc@luc@sep.invalid to comp.lang.tcl on Thu Sep 26 23:48:49 2024
    From Newsgroup: comp.lang.tcl

    On Thu, 26 Sep 2024 18:46:36 -0700, Shaun Deacon wrote:

    > Depending very much on what you want to do with the result of your
    > search, and if speed is your primary concern, there may be another
    > way to approach this...
    >
    > If you're just checking for whether a word exists in some word list,
    > have you considered creating array variables ?


    Right now, at this very exact moment, I am toying with a real time
    search box, and by "real time" I mean the search output changes with
    every new character typed into the user input widget. But I'm always
    searching for all kinds of stuff when I code. It's a very common need.

    Interesting idea with the array variables. Thank you for your input.
    --
    Luc


  • From Rich@rich@example.invalid to comp.lang.tcl on Fri Sep 27 12:44:50 2024
    From Newsgroup: comp.lang.tcl

    Luc <luc@sep.invalid> wrote:
    > On Thu, 26 Sep 2024 18:46:36 -0700, Shaun Deacon wrote:
    >
    >> Depending very much on what you want to do with the result of your
    >> search, and if speed is your primary concern, there may be another
    >> way to approach this...
    >>
    >> If you're just checking for whether a word exists in some word list,
    >> have you considered creating array variables ?
    >
    > Right now, at this very exact moment, I am toying with a real time
    > search box, and by "real time" I mean the search output changes with
    > every new character typed into the user input widget. But I'm always
    > searching for all kinds of stuff when I code. It's a very common need.
    >
    > Interesting idea with the array variables. Thank you for your input.

    Do note that, in reality, which method turns out to be "fastest" is
    an "it depends" (and it depends upon several variables, including at
    least the size of the data you want to search).

    To determine which is actually fastest, you need your data set, and you
    need to do some experiments with all the default options using Tcl's
    [time] command to determine how long each takes.
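A sketch of such an experiment (the word list here is tiny and made up; substitute your real data and iteration counts):

```tcl
# Compare a linear lsearch against an array lookup on the same data.
set bigWordList {apple banana cherry date elderberry fig grape}
foreach w $bigWordList { set words($w) 1 }

# [time] runs the script N times and reports microseconds per run.
puts [time { lsearch $bigWordList "grape" } 10000]
puts [time { info exists words(grape) } 10000]
```

Only numbers measured on your own data set settle the question.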

    And, if you really want to test, you also need to test against
    alternate methods of storage and search to see if one of those
    alternates is actually faster. Given the 'hint' you've given above,
    you might find that something like a prefix trie
    (https://en.wikipedia.org/wiki/Trie), especially one built as a C
    extension, is faster than any of Tcl's built-in operators. With this
    post you are crossing over from "general purpose search, with
    reasonable performance" into "specialized searching, for fastest
    performance", and the realm of specialization in this area can be
    both wide and deep.
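To make the trie idea concrete, here is a plain-Tcl sketch built from nested dicts (all names and data are made up, and a C extension would be far faster); the prefix test is the kind of check a search box repeats on every keystroke:

```tcl
# Build a trie as nested dicts; the key "end" marks a complete word
# (it cannot collide with the single-character keys used for letters).
proc trieAdd {trieVar word} {
    upvar 1 $trieVar t
    set path {}
    foreach ch [split $word ""] {
        lappend path $ch
        if {![dict exists $t {*}$path]} {
            dict set t {*}$path [dict create]
        }
    }
    dict set t {*}$path end 1
}

# Check whether any stored word starts with $prefix, walking one
# dict level per character.
proc trieHasPrefix {trie prefix} {
    foreach ch [split $prefix ""] {
        if {![dict exists $trie $ch]} { return 0 }
        set trie [dict get $trie $ch]
    }
    return 1
}

set t [dict create]
foreach w {car cart dog} { trieAdd t $w }
puts [trieHasPrefix $t ca]   ;# → 1
puts [trieHasPrefix $t do]   ;# → 1
puts [trieHasPrefix $t x]    ;# → 0
```

Each keystroke costs work proportional to the prefix length, independent of how many words are stored.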
