
Subject: Re: [Haskell-cafe] Tokenizing and Parsec
From: Khudyakov Alexey
Date: Tue, 12 Jan 2010 21:33:31 +0300
In a message dated 12 January 2010 03:35:10, Günther Schmidt wrote:
> Hi all,
> 
> I've used Parsec to "tokenize" data from a text file. It was actually
> quite easy, everything is correctly identified.
> 
> So now I have a list/stream of self defined "Tokens" and now I'm stuck.
> Because now I need to write my own parsec-token-parsers to parse this
> token stream in a context-sensitive way.
> 
> Uhm, how do I that then?
> 
That's pretty easy, actually. You can use the function `token' to define your own 
primitive parsers. It's defined in Text.Parsec.Prim, if I remember correctly.

You may also want to add source-position information to your lexemes. Here is some 
code to illustrate the usage:

> 
> import Text.Parsec
> import Text.Parsec.Pos (SourcePos)
> 
> -- | A language lexeme
> data LexemData = Ident String
>                | Number Double
>                | StringLit String
>                | None
>                | EOL
>                  deriving (Show,Eq)
> 
> data Lexem = Lexem { lexemPos  :: SourcePos
>                    , lexemData :: LexemData
>                    }
>              deriving Show
> 
> type ParserLex = Parsec [Lexem] ()
> 
> num :: ParserLex Double
> num = token (show . lexemData) lexemPos (comp . lexemData)
>     where
>       comp (Number x) = Just x
>       comp _          = Nothing
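
For completeness, here is a minimal sketch of how the two stages might be wired
together (the names `lexItem', `lexer' and `parseNum' are mine, not from the
original post). The character-level lexer records the current position with
getPosition while building the Lexem stream, and the result is then fed to the
token-level `num' parser above. It assumes Parsec 3, which provides a Stream
instance for lists, and reuses the imports and types defined above:

> -- Character-level lexer: record the position, then read one lexeme.
> lexItem :: Parsec String () Lexem
> lexItem = do
>   pos <- getPosition
>   dat <- numberLex <|> identLex
>   return (Lexem pos dat)
>   where
>     numberLex = Number . read <$> many1 digit
>     identLex  = Ident <$> many1 letter
> 
> lexer :: Parsec String () [Lexem]
> lexer = spaces *> many (lexItem <* spaces) <* eof
> 
> -- Run both stages: lex the input into a token stream, then parse it.
> parseNum :: String -> Either ParseError Double
> parseNum input = runParser lexer () "<input>" input
>              >>= runParser num () "<tokens>"

With these definitions, parseNum " 42 " should yield Right 42.0, while any
non-numeric lexeme is rejected by `comp' in the `num' parser.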
_______________________________________________
Haskell-Cafe mailing list
[email protected]
http://www.haskell.org/mailman/listinfo/haskell-cafe
