The demo that ships with Lucene 4.2
Published: 2019-06-22


What Lucene does is well covered online, so I won't repeat it here. What I do want to cover are the following points.

1. Why choose Lucene instead of querying the database directly

Lucene is a full-text search engine: it is good at going from a term back to the documents that contain it, i.e. the lookup runs in reverse. With a database, running a LIKE query over a field holding 2,000 characters is very inefficient; Lucene instead builds an inverted index, mapping terms to documents, which makes such queries fast.
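To make the inverted-index idea concrete, here is a tiny, self-contained Java sketch (illustrative only, not Lucene code; the class and variable names are made up):

import java.util.HashMap;
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;

public class InvertedIndexSketch {
  public static void main(String[] args) {
    // Two tiny "documents", identified by their position in the array
    String[] docs = {
        "lucene builds an inverted index",
        "a database scans every row for a match"
    };

    // The inverted index: term -> ids of the documents containing it
    Map<String, Set<Integer>> index = new HashMap<>();
    for (int docId = 0; docId < docs.length; docId++) {
      for (String term : docs[docId].split("\\s+")) {
        index.computeIfAbsent(term, k -> new TreeSet<>()).add(docId);
      }
    }

    // A lookup is one map access instead of a scan over every document,
    // which is what a SQL LIKE '%word%' query has to do.
    System.out.println("'index' appears in docs: " + index.get("index")); // [0]
  }
}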

2. The main classes and methods involved in building an index

To index documents, Lucene provides five basic classes: Document, Field, IndexWriter, Analyzer, and Directory. Here is what each of them does:

 

Document

A Document describes the item being indexed: an HTML page, an e-mail, or a text file. A Document object is composed of multiple Field objects. Think of a Document as a row in a database table, and each Field as one of its columns.

 

Field

A Field object describes one attribute of a document; for example, an e-mail's subject and body can be described by two separate Field objects.
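As a quick illustration, here is a minimal sketch using the Lucene 4.x field types that the demo code later in this post relies on; the field names are made up:

import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.StringField;
import org.apache.lucene.document.TextField;

public class BuildDocument {
  public static Document mailAsDocument() {
    // Model an e-mail as a Document: each attribute becomes a Field.
    Document mail = new Document();
    // StringField: indexed as one exact token, not analyzed; good for keys and filters
    mail.add(new StringField("subject", "Meeting at 3pm", Field.Store.YES));
    // TextField: analyzed into individual terms; good for full-text content
    mail.add(new TextField("body", "Please join the weekly sync in room 204.", Field.Store.NO));
    return mail;
  }
}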

 

Analyzer

Before a document is indexed, its content must first be tokenized, and that is the Analyzer's job. Analyzer is an abstract class with many implementations; choose the one that suits your language and application. The Analyzer hands the tokenized content to the IndexWriter, which builds the index.

Different needs call for different analyzers; for more on choosing an analyzer see http://approximation.iteye.com/blog/345885
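To see what an Analyzer actually produces, this sketch (against the Lucene 4.2 TokenStream API, with StandardAnalyzer standing in for whichever analyzer you pick) prints the tokens of a sample string:

import java.io.StringReader;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.util.Version;

public class ShowTokens {
  public static void main(String[] args) throws Exception {
    Analyzer analyzer = new StandardAnalyzer(Version.LUCENE_42);
    TokenStream ts = analyzer.tokenStream("contents", new StringReader("Lucene in Action"));
    CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
    ts.reset();                              // required before the first incrementToken()
    while (ts.incrementToken()) {
      System.out.println(term.toString());   // prints "lucene", "action" ("in" is a stop word)
    }
    ts.end();
    ts.close();
  }
}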

 

IndexWriter

IndexWriter is the core class Lucene uses to create an index. Its job is to add Document objects to the index one by one.
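A minimal open-add-close sketch (assuming Lucene 4.2 on the classpath; the index path reuses the one from the demo below):

import java.io.File;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.Version;

public class AddOneDoc {
  public static void main(String[] args) throws Exception {
    Directory dir = FSDirectory.open(new File("D:\\test\\bb\\index"));
    IndexWriterConfig iwc = new IndexWriterConfig(Version.LUCENE_42,
        new StandardAnalyzer(Version.LUCENE_42));
    IndexWriter writer = new IndexWriter(dir, iwc);

    Document doc = new Document();
    doc.add(new TextField("contents", "hello lucene", Field.Store.YES));
    writer.addDocument(doc);   // analyze the document and add it to the index

    writer.close();            // commit changes and release the write lock
  }
}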

 

Directory

This class represents the storage location of a Lucene index. It is an abstract class with two main implementations: FSDirectory, which stores the index in the file system, and RAMDirectory, which stores the index in memory.
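Both are created the same way; the choice only changes where the index lives (a sketch, reusing the path from this post):

import java.io.File;

import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.store.RAMDirectory;

public class DirectoryKinds {
  public static void main(String[] args) throws Exception {
    // On disk: the index persists across JVM restarts
    Directory diskDir = FSDirectory.open(new File("D:\\test\\bb\\index"));
    // In memory: fastest, but gone when the JVM exits; handy for tests
    Directory ramDir = new RAMDirectory();
    System.out.println(diskDir + " / " + ramDir);
  }
}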

 

Now that we are familiar with the classes needed to build an index, we can index the text files under a directory; the indexing code later in this post gives the full source.

3. The main classes and methods involved in searching

Searching with Lucene is just as convenient as building the index. In the previous section we built an index over the text files in a directory; now we search that index for documents containing a given keyword or phrase. Lucene provides several basic classes for this: IndexSearcher, Term, Query, TermQuery, and Hits. Here is what each of them does:

 

Query

Query is an abstract class with several concrete subclasses, such as TermQuery, BooleanQuery, and PrefixQuery. Its purpose is to encapsulate the user's query string into something Lucene can execute.
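For example, a sketch of the three subclasses named above, against the Lucene 4.x API (where BooleanQuery is still mutable); the field names come from the demo below:

import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.PrefixQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;

public class QueryKinds {
  public static Query build() {
    // Match documents whose "contents" field contains the exact term "lucene"
    Query byTerm = new TermQuery(new Term("contents", "lucene"));
    // Match documents whose "path" field starts with the given prefix
    Query byPrefix = new PrefixQuery(new Term("path", "D:\\test"));
    // Combine them: the term is required, the prefix only boosts the score
    BooleanQuery combined = new BooleanQuery();   // mutable in Lucene 4.x
    combined.add(byTerm, BooleanClause.Occur.MUST);
    combined.add(byPrefix, BooleanClause.Occur.SHOULD);
    return combined;
  }
}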

 

Term

A Term is the basic unit of search. A Term object consists of two String parts and can be created with a single statement: Term term = new Term("fieldName", "queryWord"); The first argument names the Field of the document to search in, and the second is the keyword to search for.

For example, if I delete a record from my database and want to remove the corresponding document from the index (note: in Lucene 4.x deletions go through IndexWriter, not IndexReader, and both Term arguments must be Strings):

  Term term = new Term("userid", "11110");
  writer.deleteDocuments(term);

 

TermQuery

TermQuery is a subclass of the abstract class Query, and it is the most basic query type Lucene supports. A TermQuery object is created like this: TermQuery termQuery = new TermQuery(new Term("fieldName", "queryWord")); Its constructor takes a single argument: a Term object.

 

IndexSearcher

IndexSearcher is used to search over a built index. It opens an index in read-only mode only, so multiple IndexSearcher instances can operate on the same index at once.
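Opening a searcher in Lucene 4.x goes through DirectoryReader, as the search demo below also does (a sketch, reusing the demo's index path):

import java.io.File;

import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.store.FSDirectory;

public class OpenSearcher {
  public static IndexSearcher open() throws Exception {
    IndexReader reader = DirectoryReader.open(FSDirectory.open(new File("D:\\test\\bb\\index")));
    // The reader is read-only, so any number of searchers can share the same index.
    return new IndexSearcher(reader);
  }
}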

 

Hits

Hits was the class that held search results. Note that Hits was removed in later Lucene versions; in Lucene 4.x, search results are returned as TopDocs/ScoreDoc, which is what the search demo below uses.
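A minimal sketch of retrieving results with TopDocs/ScoreDoc (the same pattern as in the search demo below):

import org.apache.lucene.document.Document;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.TopDocs;

public class PrintHits {
  public static void print(IndexSearcher searcher, Query query) throws Exception {
    TopDocs results = searcher.search(query, 10);        // at most 10 hits
    System.out.println(results.totalHits + " matching documents");
    for (ScoreDoc hit : results.scoreDocs) {
      Document doc = searcher.doc(hit.doc);              // load the stored fields
      System.out.println(doc.get("path") + "  score=" + hit.score);
    }
  }
}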

Below is the example I downloaded from the Lucene website; the Maven dependencies are:

<dependency>
    <groupId>org.apache.lucene</groupId>
    <artifactId>lucene-core</artifactId>
    <version>4.2.0</version>
</dependency>
<dependency>
    <groupId>org.apache.lucene</groupId>
    <artifactId>lucene-queries</artifactId>
    <version>4.2.0</version>
</dependency>
<dependency>
    <groupId>org.apache.lucene</groupId>
    <artifactId>lucene-analyzers</artifactId>
    <version>3.6.2</version>
</dependency>
<dependency>
    <groupId>org.apache.lucene</groupId>
    <artifactId>lucene-analyzers-common</artifactId>
    <version>4.2.0</version>
</dependency>
<dependency>
    <groupId>org.apache.lucene</groupId>
    <artifactId>lucene-queryparser</artifactId>
    <version>4.2.0</version>
</dependency>

Code to build the index:

package com.my.lucene2;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.LongField;
import org.apache.lucene.document.StringField;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig.OpenMode;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.Term;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.store.RAMDirectory;
import org.apache.lucene.util.Version;
import org.wltea.analyzer.lucene.IKAnalyzer;

import java.io.BufferedReader;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.Date;

public class IndexFiles {

  private IndexFiles() {}

  public static void main(String[] args) {
    // Where the index will be written
    String indexPath = "D:\\test\\bb\\index";
    // The directory containing the files to index
    String docsPath = "D:\\test\\aa\\";
    final File docDir = new File(docsPath);
    if (!docDir.exists() || !docDir.canRead()) {
      System.exit(1);
    }

    Date start = new Date();
    try {
      System.out.println("Indexing to directory '" + indexPath + "'...");
      // Directory rdir = new RAMDirectory();  // keep the index in memory instead
      // Store the index on disk
      Directory dir = FSDirectory.open(new File(indexPath));
      // Analyzer analyzer = new StandardAnalyzer(Version.LUCENE_42);  // Lucene's built-in standard analyzer
      Analyzer analyzer = new IKAnalyzer();  // IK analyzer (for Chinese text)
      // Indexing configuration
      IndexWriterConfig iwc = new IndexWriterConfig(Version.LUCENE_42, analyzer);

      boolean create = true;
      if (create) {
        // Create a new index, replacing any existing one
        iwc.setOpenMode(OpenMode.CREATE);
      } else {
        // Add to an existing index
        iwc.setOpenMode(OpenMode.CREATE_OR_APPEND);
      }

      // Optional: for better indexing performance, if you
      // are indexing many documents, increase the RAM
      // buffer.  But if you do this, increase the max heap
      // size to the JVM (eg add -Xmx512m or -Xmx1g):
      //
      // iwc.setRAMBufferSizeMB(256.0);

      // Create the index writer
      IndexWriter writer = new IndexWriter(dir, iwc);
      indexDocs(writer, docDir);

      // NOTE: if you want to maximize search performance,
      // you can optionally call forceMerge here.  This can be
      // a terribly costly operation, so generally it's only
      // worth it when your index is relatively static (ie
      // you're done adding documents to it):
      //
      // writer.forceMerge(1);

      writer.close();

      Date end = new Date();
      System.out.println(end.getTime() - start.getTime() + " total milliseconds");
    } catch (IOException e) {
      System.out.println(" caught a " + e.getClass() +
       "\n with message: " + e.getMessage());
    }
  }

  static void indexDocs(IndexWriter writer, File file)
    throws IOException {
    // do not try to index files that cannot be read
    if (file.canRead()) {
      if (file.isDirectory()) {
        String[] files = file.list();
        // an IO error could occur
        if (files != null) {
          for (int i = 0; i < files.length; i++) {
            indexDocs(writer, new File(file, files[i]));
          }
        }
      } else {
        FileInputStream fis;
        try {
          fis = new FileInputStream(file);
        } catch (FileNotFoundException fnfe) {
          // at least on windows, some temporary files raise this exception with an "access denied" message
          // checking if the file can be read doesn't help
          return;
        }

        try {
          // make a new, empty document
          Document doc = new Document();

          // Add the path of the file as a field named "path".  Use a
          // field that is indexed (i.e. searchable), but don't tokenize
          // the field into separate words and don't index term frequency
          // or positional information:
          Field pathField = new StringField("path", file.getPath(), Field.Store.YES);
          System.out.println("sss " + pathField);
          doc.add(pathField);

          // Add the last modified date of the file a field named "modified".
          // Use a LongField that is indexed (i.e. efficiently filterable with
          // NumericRangeFilter).  This indexes to milli-second resolution, which
          // is often too fine.  You could instead create a number based on
          // year/month/day/hour/minutes/seconds, down the resolution you require.
          // For example the long value 2011021714 would mean
          // February 17, 2011, 2-3 PM.
          doc.add(new LongField("modified", file.lastModified(), Field.Store.NO));

          // Add the contents of the file to a field named "contents".  Specify a Reader,
          // so that the text of the file is tokenized and indexed, but not stored.
          // Note that FileReader expects the file to be in UTF-8 encoding.
          // If that's not the case searching for special characters will fail.
          doc.add(new TextField("contents", new BufferedReader(new InputStreamReader(fis, "gbk"))));
          doc.add(new StringField("test", "雪含心", Field.Store.YES));
          // More fields can be added via doc.add as needed, e.g. a "userid" field,
          // so that each time a user is deleted you can update the index accordingly.

          if (writer.getConfig().getOpenMode() == OpenMode.CREATE) {
            // New index, so we just add the document (no old document can be there):
            System.out.println("adding " + file);
            writer.addDocument(doc);
          } else {
            // Existing index (an old copy of this document may have been indexed) so
            // we use updateDocument instead to replace the old one matching the exact
            // path, if present:
            System.out.println("updating " + file);
            writer.updateDocument(new Term("path", file.getPath()), doc);
          }
        } finally {
          fis.close();
        }
      }
    }
  }
}

Code to search:

package com.my.lucene2;

/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements.  See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License.  You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

import java.io.BufferedReader;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.Date;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.Version;
import org.wltea.analyzer.lucene.IKAnalyzer;

/** Simple command-line based search demo. */
public class SearchFiles {

  private SearchFiles() {}

  /** Simple command-line based search demo. */
  public static void main(String[] args) throws Exception {
    String usage =
      "Usage:\tjava org.apache.lucene.demo.SearchFiles [-index dir] [-field f] [-repeat n] [-queries file] [-query string] [-raw] [-paging hitsPerPage]\n\nSee http://lucene.apache.org/core/4_1_0/demo/ for details.";
    if (args.length > 0 && ("-h".equals(args[0]) || "-help".equals(args[0]))) {
      System.out.println(usage);
      System.exit(0);
    }

    String index = "D:\\test\\bb\\index\\";
    String field = "contents";                      // field to search
    String queries = "D:\\test\\bb\\index\\bb.txt";
    int repeat = 0;
    boolean raw = false;
    String queryString = null;
    int hitsPerPage = 10;                           // page size for paging

    // Open the index
    IndexReader reader = DirectoryReader.open(FSDirectory.open(new File(index)));
    IndexSearcher searcher = new IndexSearcher(reader);
    // Standard analyzer:
    // Analyzer analyzer = new StandardAnalyzer(Version.LUCENE_42);
    Analyzer analyzer = new IKAnalyzer();

    // Read the query text from a file; it could also be hard-coded
    BufferedReader in = null;
    if (queries != null) {
      in = new BufferedReader(new InputStreamReader(new FileInputStream(queries), "gbk"));
    } else {
      in = new BufferedReader(new InputStreamReader(System.in, "gbk"));
    }
    // Build the query parser
    QueryParser parser = new QueryParser(Version.LUCENE_42, field, analyzer);
    while (true) {
      if (queries == null && queryString == null) {
        // prompt the user
        System.out.println("Enter query: ");
      }

      // Read the next query line into queryString
      String line = queryString != null ? queryString : in.readLine();
      if (line == null) {
        break;
      }

      line = line.trim();
      if (line.length() == 0) {
        break;
      }

      // Run the query
      Query query = parser.parse(line);
      System.out.println("Searching for: " + query.toString(field));

      // If repeat > 0, rerun the query as a crude benchmark, taking the top 100 hits
      // (kept as-is from the demo; not otherwise meaningful here)
      if (repeat > 0) {
        Date start = new Date();
        for (int i = 0; i < repeat; i++) {
          searcher.search(query, null, 100);
        }
        Date end = new Date();
        System.out.println("Time: " + (end.getTime() - start.getTime()) + "ms");
      }

      doPagingSearch(in, searcher, query, hitsPerPage, raw, queries == null && queryString == null);

      if (queryString != null) {
        break;
      }
    }
    reader.close();
  }

  /**
   * This demonstrates a typical paging search scenario, where the search engine presents
   * pages of size n to the user. The user can then go to the next page if interested in
   * the next hits.
   *
   * When the query is executed for the first time, then only enough results are collected
   * to fill 5 result pages. If the user wants to page beyond this limit, then the query
   * is executed another time and all hits are collected.
   */
  public static void doPagingSearch(BufferedReader in, IndexSearcher searcher, Query query,
                                     int hitsPerPage, boolean raw, boolean interactive) throws IOException {

    // Collect enough docs to show 5 pages
    TopDocs results = searcher.search(query, 5 * hitsPerPage);
    // The matching documents
    ScoreDoc[] hits = results.scoreDocs;
    // Total hit count
    int numTotalHits = results.totalHits;
    System.out.println(numTotalHits + " total matching documents");

    int start = 0;
    int end = Math.min(numTotalHits, hitsPerPage);

    while (true) {
      if (end > hits.length) {
        System.out.println("Only results 1 - " + hits.length + " of " + numTotalHits + " total matching documents collected.");
        System.out.println("Collect more (y/n) ?");
        String line = in.readLine();
        if (line.length() == 0 || line.charAt(0) == 'n') {
          break;
        }

        hits = searcher.search(query, numTotalHits).scoreDocs;
      }

      end = Math.min(hits.length, start + hitsPerPage);

      for (int i = start; i < end; i++) {
        if (raw) {                              // output raw format
          System.out.println("doc=" + hits[i].doc + " score=" + hits[i].score);
          continue;
        }

        // Load the matching document
        Document doc = searcher.doc(hits[i].doc);
        String path = doc.get("path");
        // print the stored "test" field (雪含心)
        System.out.println("the content is ....." + doc.get("test"));
        if (path != null) {
          System.out.println((i + 1) + ". " + path);
          String title = doc.get("title");
          if (title != null) {
            System.out.println("   Title: " + doc.get("title"));
          }
        } else {
          System.out.println((i + 1) + ". " + "No path for this document");
        }
      }

      if (!interactive || end == 0) {
        break;
      }

      if (numTotalHits >= end) {
        boolean quit = false;
        while (true) {
          System.out.print("Press ");
          if (start - hitsPerPage >= 0) {
            System.out.print("(p)revious page, ");
          }
          if (start + hitsPerPage < numTotalHits) {
            System.out.print("(n)ext page, ");
          }
          System.out.println("(q)uit or enter number to jump to a page.");

          String line = in.readLine();
          if (line.length() == 0 || line.charAt(0) == 'q') {
            quit = true;
            break;
          }
          if (line.charAt(0) == 'p') {
            start = Math.max(0, start - hitsPerPage);
            break;
          } else if (line.charAt(0) == 'n') {
            if (start + hitsPerPage < numTotalHits) {
              start += hitsPerPage;
            }
            break;
          } else {
            int page = Integer.parseInt(line);
            if ((page - 1) * hitsPerPage < numTotalHits) {
              start = (page - 1) * hitsPerPage;
              break;
            } else {
              System.out.println("No such page");
            }
          }
        }
        if (quit) break;
        end = Math.min(numTotalHits, start + hitsPerPage);
      }
    }
  }
}

The overall flow of this code: read all the files under a directory and index them, then read a term from a file and search for it. The data to index could just as well be read from a database.

Summary: my knowledge of Lucene is not yet very deep, and I hope to deepen it; I also plan to study Solr later. Lucene's application scenarios and tokenization techniques are well worth careful study.

 

Reprinted from: https://my.oschina.net/zaxb/blog/1544115
