boost spirit lexer : outdated tutorial?

I tried compiling the boost.spirit.lex example I found on
http://www.boost.org/doc/libs/1_41_0/libs/spirit/doc/html/spirit/lex/tutorials/lexer_quickstart1.html
(complete example code : http://www.boost.org/doc/libs/1_41_0/libs/spirit/example/lex/word_count_functor.cpp )
but unfortunately it didn't compile. It included deprecated headers, and the namespaces didn't match.
I corrected those, but I still got several errors that I couldn't fix, and I couldn't find any up-to-date tutorials.
Here is the code; I stripped out the comments so it's shorter. You can find the original comments at the link above.
The problematic line is in main(). I got errors there, and in several other places inside the lexer implementation.

#include <boost/config/warning_disable.hpp>

#include <boost/spirit/include/lex.hpp>
#include <boost/bind.hpp>
#include <boost/ref.hpp>

#include <iostream>
#include <string>

namespace lex = boost::spirit::lex;

enum token_ids {
    ID_WORD = 1000, ID_EOL, ID_CHAR
};

template<typename Lexer>
struct word_count_tokens: lex::lexer<Lexer> {
    word_count_tokens() {       
        this->self.add("[^ \t\n]+", ID_WORD) 
        ("\n", ID_EOL) 
        (".", ID_CHAR) 
        ;
    }
};

struct counter {

    typedef bool result_type;

    template<typename Token>
    bool operator()(Token const& t, std::size_t& c, std::size_t& w, std::size_t& l) const {
        switch (t.id()) {
            case ID_WORD: 
                ++w;
                c += t.value().size();
                break;
            case ID_EOL:
                ++l;
                ++c;
                break;
            case ID_CHAR:
                ++c;
                break;
        }
        return true; 
    }
};

int main(int argc, char* argv[]) {
    std::size_t c = 0, w = 0, l = 0;

    std::string str("some text\n and a new line");

    
    word_count_tokens< lex::lexer<> > word_count_functor; // Problematic line: apparently lex::lexer<>'s template argument isn't defaulted

    char const* first = str.c_str();
    char const* last = &first[str.size()];
    bool r = lex::tokenize(first, last, word_count_functor, boost::bind(counter(), _1, boost::ref(c), boost::ref(w), boost::ref(l)));

    if (r) {
        std::cout << "lines: " << l << ", words: " << w << ", characters: " << c << "\n";
    } else {
        std::string rest(first, last);
        std::cout << "Lexical analysis failed\n" << "stopped at: \"" << rest << "\"\n";
    }
    return 0;
}


I hope somebody uses this library and can help me. Thank you.
I just tried the code from your link. I built 1.41.0 from source to make sure I had the right version.
It compiled without warnings or errors.
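For reference, the instantiation in the word_count_functor.cpp you linked names the lexertl engine explicitly instead of relying on a defaulted template argument. Roughly, the relevant part of its main() looks like this (a sketch based on the 1.41 example, so double-check it against the actual file rather than taking it verbatim):

    // As in the linked 1.41 example: the lexertl lexer engine is named
    // explicitly rather than relying on a default template argument.
    word_count_tokens<lex::lexertl::lexer<> > word_count_functor;

    char const* first = str.c_str();
    char const* last = &first[str.size()];

    // tokenize() drives the lexer and invokes the counter functor for every token.
    bool r = lex::tokenize(first, last, word_count_functor,
        boost::bind(counter(), _1, boost::ref(c), boost::ref(w), boost::ref(l)));

You might compare that against your modified line and see whether the explicit lex::lexertl::lexer<> makes a difference on your installation.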

By the way, it takes for-friggin'-ever to compile (the example, I mean. Not Boost).
For some reason I was sure I was using 1.41; turns out I had only downloaded it. It works now, thanks!

By the way, it takes for-friggin'-ever to compile

The preprocessed code is 7.7 MB, 171000 lines long. O_o

EDIT: a Google search for "boost lex tutorial" turns up this thread as the first hit, right above the actual tutorial, hehe