How would I go about creating a simple lexical analyzer that reads an input file character by character and identifies each token?
I would also like to use an enum for the token types (note: there is no `enum()` function in C++, just the `enum` keyword). Does someone have an idea of how to start, step by step?
Any tips, ideas, guides, insight, or links would be greatly appreciated.
#include <iostream>
#include <fstream>
#include <string>

int main()
{
    // "test.txt" presumably contains some characters in it.
    std::ifstream f("test.txt");
    char ch;
    while (f.get(ch))
    {
        //
        // Handle different characters from here:
        // http://www.asciitable.com/
        //
        if (ch == '\n') // Newline
        {
            std::cout << "Character: Newline\n";
        }
        else if (ch == '\t') // Tab
        {
            std::cout << "Character: Tab\n";
        }
        else
        {
            std::cout << "Character: " << ch << '\n';
        }
    }
}
Input:
This is a sentence.
This is a new sentence on the next line.
Output:
Character: T
Character: h
Character: i
Character: s
Character:
Character: i
Character: s
Character:
Character: a
Character:
Character: s
Character: e
Character: n
Character: t
Character: e
Character: n
Character: c
Character: e
Character: .
Character: Newline
Character: T
Character: h
Character: i
Character: s
Character:
Character: i
Character: s
Character:
Character: a
Character:
Character: n
Character: e
Character: w
Character:
Character: s
Character: e
Character: n
Character: t
Character: e
Character: n
Character: c
Character: e
Character:
Character: o
Character: n
Character:
Character: t
Character: h
Character: e
Character:
Character: n
Character: e
Character: x
Character: t
Character:
Character: l
Character: i
Character: n
Character: e
Character: .