Greetings,
1. Tokenization is a way to split text into tokens. I would like to do two things:
1.1 Tokenize an entire document (the tokens in my case are words, not phrases or letters).
1.2 Remove stop words.
2. I saw Cynthia's solution for counting the frequency of words in a document (https://communities.sas.com/t5/SAS-Procedures/Frequency-of-Strings/td-p/41378), but I would like to create tokens, not only count word frequencies.
3. I don't have SAS Text Miner or SAS Contextual Analysis, so I would need to use Base SAS for this task.
4. Any code example or ideas would help.
Thanks!!
D
FWIW: There are a number of R packages that do what you want.
Art, CEO, AnalystFinder.com
Thanks a lot for the info! But I would like to do this in SAS 🙂
There is an undocumented (and no longer supported) procedure that can identify all of the words in a file and obtain frequency distributions of those words, which would at least provide a Base SAS starting point.
I was going to mention it in my last post, but it no longer worked for me in SAS 9.4, at least on SAS University Edition.
However, I just received a note from someone that it was still working in SAS 9.3.
My only suggestion would be to try it. It's called PROC SPELL. Here is an example of how it can be run:
options caps;
filename temp temp;

/* the misspellings in the cards data are deliberate, so PROC SPELL has something to flag */
data _null_;
  file temp;
  informat sentence $100.;
  input sentence &;
  put sentence;
cards;
Let's see if sas spell procdure can be used to verify whether tha seperate words in this, uhm, flie are, uhm, valid against a stantard internal dictionary and let's see how versatile it is
;

proc spell in=temp nomaster verify;
run;
HTH,
Art, CEO, AnalystFinder.com
p.s. Just got confirmation from some of my colleagues that PROC SPELL does indeed work on "regular" versions of SAS 9.3 and 9.4.
For some reason, it just isn't working on SAS University Edition.
Art, CEO, AnalystFinder.com
Thanks again for the assistance and the fast replies.
I've tried PROC SPELL on SAS 9.4 and it works. However, it writes the results to a report. Do you know, by chance, whether there is a way to write the results to a dataset instead?
Thanks a lot!
D
Here's one way to separate a document into words, assuming you've already imported the document into SAS.
You can find corpora online that include parts of speech or sentiment, and then use those to help classify the words. As @art297 has indicated, there's an old (unsupported) procedure that can help with this.
I store the code here, which covers a bit more than what's below, but if you have a corpus read in, it may be useful. Hope this helps somewhat.
https://github.com/statgeek/SAS-Tutorials/blob/master/text_analysis
*Create sample data;
data random_sentences;
  infile cards truncover;
  informat sentence $256.;
  input sentence $256.;
  cards;
This is a random sentence
This is another random sentence
Happy Birthday
My job sucks.
This is a good idea, not.
This is an awesome idea!
How are you today?
Does this make sense?
Have a great day!
;

*Partition each sentence into words;
data f1;
  set random_sentences;
  id=_n_;
  nwords=countw(sentence);
  nchar=length(compress(sentence));
  do word_order=1 to nwords;
    word=scan(sentence, word_order);
    output;
  end;
run;
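To cover the stop-word removal from the original question, here is a minimal sketch that builds on the f1 dataset above. The stop-word list is just a small illustrative sample, not a standard one; swap in whatever list you prefer.

*Hypothetical stop-word list (illustrative sample only);
data stopwords;
  input word :$32.;
  cards;
a
an
is
this
the
are
you
;

*Keep only tokens that are not stop words (case-insensitive match);
proc sql;
  create table f1_nostop as
  select *
  from f1
  where upcase(word) not in
    (select upcase(word) from stopwords);
quit;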
Thanks for your reply! 🙂 In your opinion, what would be the best way to take the f1 dataset and tokenize or stem it? I need, for example, the words "goes"/"going"/"go"/"went" to fall under the same concept ("go").
Thanks!
D
You need a mapping document, like the corpus I mentioned, that maps each word to its 'root' (its stem or lemma).
Once you have those documents set up, you can merge the data and assign the variants to the same group.
Some of these mapping documents are open source, but many are not.
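As a minimal sketch of that merge, assuming a hypothetical mapping dataset stem_map (the three entries below are just illustrative; a real corpus would supply the full word-to-root mapping):

*Hypothetical word-to-root mapping (illustrative entries only);
data stem_map;
  input word :$32. root :$32.;
  cards;
goes go
going go
went go
;

*Attach the root to each token (fall back to the word itself when unmapped);
proc sql;
  create table f1_stemmed as
  select a.*, coalesce(b.root, a.word) as token
  from f1 as a
  left join stem_map as b
    on upcase(a.word) = upcase(b.word);
quit;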
I don't think SAS would appreciate my posting it here, but I happen to have a copy of the SPELL procedure's documentation.
I'd be glad to answer any of your questions offline. Send me a note at: art@analystfinder.com
There isn't an output option, but the documentation says to just save the output and use it as input back into SAS or any text editor. Of course, these days you can accomplish that using PROC PRINTTO.
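For capturing that output, here is a minimal sketch using PROC PRINTTO; the file path is just a placeholder, and it assumes the temp fileref from the earlier example still exists.

*Route procedure output to a text file (path is a placeholder);
proc printto print='/folders/myfolders/spell_results.txt' new;
run;

proc spell in=temp nomaster verify;
run;

*Restore the default output destination;
proc printto;
run;

*Read the captured report back in as a dataset for further parsing;
data spell_results;
  infile '/folders/myfolders/spell_results.txt' truncover;
  input line $char120.;
run;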
You can also create your own dictionaries, so you could create a dictionary of stop words, tokens, or whatever you need.
Art, CEO, AnalystFinder.com
Thanks for the information. Are you familiar with a good and effective corpus I could use for this tokenization?
Thanks!
D
I think the Brown Corpus is considered the best, but I don't think it's free. If you find a free, open-source copy, post the link 🙂
Is there any solution to your problem?