A Beginner's Guide to Searching With Lucene

Written by Andrew Lalis.

Nowadays, if you want to build the next fancy new web app, chances are pretty good that you'll need a search bar in it, and for that you've probably heard of Elasticsearch or some other fancy, all-in-one solution. In this article, however, I'd like to convince you that you don't need any of that; instead, you can brew up your own homemade search feature using Apache Lucene.

Hopefully you'll be surprised by how easy it is.

The Use Case

Before we dive into the code, it's important to make sure that you actually need an indexing and searching tool that goes beyond simple SQL queries.

If you can answer "yes" to any of these questions, then continue right along:

  1. Do you need full-text search over several fields at once, rather than exact matches on a single column?
  2. Do you want results ranked by relevance, instead of a fixed sort order?
  3. Are simple SQL LIKE queries too slow or too inflexible for your dataset?

Indexing and Searching Basics

No matter which searching solution you end up choosing, they all generally follow the same two-step approach:

  1. Ingest data and produce an index.
  2. Search for data quickly using the index.

In most situations, ingesting data roughly translates to scraping content from a database, a message queue, or even plain CSV files. The contents of each entity are analyzed, and the important bits are extracted and stored in a compressed format that's optimized for high-speed searching. The exact data structures depend on the solution you choose, but whereas relational databases typically index columns with B-tree variants, full-text engines like Lucene build an inverted index: a mapping from each term to the list of documents that contain it.
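As a toy illustration (this isn't how Lucene stores things on disk, but it's the same idea conceptually), an inverted index is little more than a map from terms to document ids:

				public class InvertedIndexDemo {
					public static void main(String[] args) {
						List<String> documents = List.of("eindhoven airport", "eindhoven station");
						// Map each term to the ids of the documents that contain it.
						Map<String, List<Integer>> index = new HashMap<>();
						for (int docId = 0; docId < documents.size(); docId++) {
							for (String term : documents.get(docId).split("\\s+")) {
								index.computeIfAbsent(term, t -> new ArrayList<>()).add(docId);
							}
						}
						// A search is now just a map lookup; prints "[0, 1]".
						System.out.println(index.get("eindhoven"));
					}
				}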

Searching over your index involves parsing a user's query (and sanitizing it, if necessary), and then constructing a well-formed query that's accepted by your searching solution, possibly with different weights or criteria applied to different fields.

This is no different for Lucene, and in this guide, we'll go through how to create an index and search through it.


Setting Up a New Project

In this guide, I'll be creating a small Java program for searching over a huge set of airports, which is available for free here: https://ourairports.com/data/. The full source code for this project is available on GitHub, if you'd like to take a look.

I'll be using Maven as the build tool of choice, but feel free to use whatever you'd like.

We start by creating a new project and adding the lucene-core dependency, along with the Apache Commons CSV library for parsing the CSV dataset.


				<dependencies>
					<!-- https://mvnrepository.com/artifact/org.apache.lucene/lucene-core -->
					<dependency>
						<groupId>org.apache.lucene</groupId>
						<artifactId>lucene-core</artifactId>
						<version>9.5.0</version>
					</dependency>
					<!-- https://mvnrepository.com/artifact/org.apache.commons/commons-csv -->
					<dependency>
						<groupId>org.apache.commons</groupId>
						<artifactId>commons-csv</artifactId>
						<version>1.10.0</version>
					</dependency>
				</dependencies>
			

Parsing the Data

First of all, we need to parse the CSV data into a programming construct that we can use elsewhere in our code. In this case, I've defined the Airport record like so:


				public record Airport(
						long id,
						String ident,
						String type,
						String name,
						double latitude,
						double longitude,
						Optional<Integer> elevationFt,
						String continent,
						String isoCountry,
						String isoRegion,
						String municipality,
						boolean scheduledService,
						Optional<String> gpsCode,
						Optional<String> iataCode,
						Optional<String> localCode,
						Optional<String> homeLink,
						Optional<String> wikipediaLink,
						Optional<String> keywords
				) {}
			

And a simple AirportParser class that just reads in a CSV file and returns a List<Airport> (check the source code to see exactly how I did it; a rough sketch follows below).
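If you just want the general shape of it, here's a minimal sketch using Commons CSV. Be aware that the column names (latitude_deg, elevation_ft, scheduled_service, and so on) are my assumptions based on the ourairports.com header row, so verify them against your copy of the file:

				public class AirportParser {
					public static List<Airport> parse(Path csvFile) throws IOException {
						List<Airport> airports = new ArrayList<>();
						// Read the column names from the first row, then skip it when iterating.
						CSVFormat format = CSVFormat.DEFAULT.builder()
								.setHeader()
								.setSkipHeaderRecord(true)
								.build();
						try (Reader reader = Files.newBufferedReader(csvFile)) {
							for (CSVRecord r : format.parse(reader)) {
								airports.add(new Airport(
										Long.parseLong(r.get("id")),
										r.get("ident"),
										r.get("type"),
										r.get("name"),
										Double.parseDouble(r.get("latitude_deg")),
										Double.parseDouble(r.get("longitude_deg")),
										intOrEmpty(r.get("elevation_ft")),
										r.get("continent"),
										r.get("iso_country"),
										r.get("iso_region"),
										r.get("municipality"),
										// The dataset marks scheduled service as "yes" or "no".
										"yes".equals(r.get("scheduled_service")),
										strOrEmpty(r.get("gps_code")),
										strOrEmpty(r.get("iata_code")),
										strOrEmpty(r.get("local_code")),
										strOrEmpty(r.get("home_link")),
										strOrEmpty(r.get("wikipedia_link")),
										strOrEmpty(r.get("keywords"))
								));
							}
						}
						return airports;
					}

					private static Optional<String> strOrEmpty(String s) {
						return (s == null || s.isBlank()) ? Optional.empty() : Optional.of(s);
					}

					private static Optional<Integer> intOrEmpty(String s) {
						return (s == null || s.isBlank()) ? Optional.empty() : Optional.of(Integer.parseInt(s));
					}
				}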

Now that we've got our list of entities, we can build an index from them.

Indexing

In order to efficiently search over a massive set of data, we need to prepare a special set of index files that Lucene can read during searches. To do that, we need to create a new directory for the index to live in, construct a new IndexWriter, and create a Document for each airport we're indexing.


				public static void buildIndex(List<Airport> airports) throws IOException {
					Path indexDir = Path.of("airports-index");
					// We use a try-with-resources block so the analyzer, directory, and writer
					// are all closed properly, even if indexing fails partway through.
					try (
						Analyzer analyzer = new StandardAnalyzer();
						Directory luceneDir = FSDirectory.open(indexDir);
						// OpenMode.CREATE overwrites any existing index in the directory.
						IndexWriter indexWriter = new IndexWriter(luceneDir,
								new IndexWriterConfig(analyzer).setOpenMode(IndexWriterConfig.OpenMode.CREATE))
					) {
						for (var airport : airports) {
							// Create a new document for each airport.
							Document doc = new Document();
							doc.add(new StoredField("id", airport.id()));
							doc.add(new TextField("ident", airport.ident(), Field.Store.YES));
							doc.add(new TextField("type", airport.type(), Field.Store.YES));
							doc.add(new TextField("name", airport.name(), Field.Store.YES));
							doc.add(new TextField("continent", airport.continent(), Field.Store.YES));
							doc.add(new TextField("isoCountry", airport.isoCountry(), Field.Store.YES));
							doc.add(new TextField("municipality", airport.municipality(), Field.Store.YES));
							// IntPoint makes the value searchable; StoredField makes it retrievable.
							doc.add(new IntPoint("elevationFt", airport.elevationFt().orElse(0)));
							doc.add(new StoredField("elevationFt", airport.elevationFt().orElse(0)));
							if (airport.wikipediaLink().isPresent()) {
								doc.add(new StoredField("wikipediaLink", airport.wikipediaLink().get()));
							}
							// And add it to the writer.
							indexWriter.addDocument(doc);
						}
					}
				}
			
Note that some of the airport's properties are Optional, so we need to be a little careful not to end up with unexpected null values in our documents.

An important takeaway here is the construction of the Document. There are a variety of field types you can add to your document, each of which affects the search differently: TextField values are run through the analyzer and split into searchable terms, StoredField values are only stored for retrieval, and IntPoint values are indexed for fast numeric range queries.
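To give a quick feel for the differences, here's a small sampler with illustrative values (not part of the project's code), given a Document doc like the one above:

				// TextField is analyzed: "New York" becomes the searchable terms "new" and "york".
				doc.add(new TextField("municipality", "New York", Field.Store.YES));
				// StringField is indexed as a single exact token; good for codes and identifiers.
				doc.add(new StringField("isoCountry", "US", Field.Store.NO));
				// StoredField is retrievable from search results, but can't be searched at all.
				doc.add(new StoredField("wikipediaLink", "https://en.wikipedia.org/wiki/Eindhoven_Airport"));
				// IntPoint is indexed for fast numeric range queries, e.g.
				// IntPoint.newRangeQuery("elevationFt", 0, 5000).
				doc.add(new IntPoint("elevationFt", 69));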

For more information about the types of fields that you can use, check the Lucene documentation. It's very well-written.

Also important to note is that once a document is added, it stays in the index until either the index is removed or overwritten, or the document is deleted through another IndexWriter method. I'd suggest reading the documentation if you'd like to learn more about how to dynamically update a living index that grows with your data, but for 95% of use cases, regenerating the search index occasionally is just fine.
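If you do need a living index, the general recipe is to open the writer with OpenMode.CREATE_OR_APPEND and give every document an exact-match key field. Here's a minimal sketch; the idKey field is my own addition (the StoredField id used earlier is stored only, so it can't serve as a key):

				public static void upsertAirport(IndexWriter indexWriter, Airport airport) throws IOException {
					Document doc = new Document();
					// StringField is indexed as-is (not analyzed), so it works as an exact-match key.
					doc.add(new StringField("idKey", String.valueOf(airport.id()), Field.Store.NO));
					doc.add(new StoredField("id", airport.id()));
					doc.add(new TextField("name", airport.name(), Field.Store.YES));
					// ... add the other fields as in buildIndex ...
					// updateDocument deletes any existing document matching the term, then adds this one.
					indexWriter.updateDocument(new Term("idKey", String.valueOf(airport.id())), doc);
				}

				public static void removeAirport(IndexWriter indexWriter, long airportId) throws IOException {
					indexWriter.deleteDocuments(new Term("idKey", String.valueOf(airportId)));
				}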

Searching

Now that we've built an index from our dataset, we can search over it to find the most relevant results for a user's query.

The following code might look a bit daunting, but I've added some comments to explain what's going on, and I'll walk you through the process below.


				public static List<String> searchAirports(String rawQuery) {
					Path indexDir = Path.of("airports-index");
					// If the query is empty or there's no index, quit right away.
					if (rawQuery == null || rawQuery.isBlank() || Files.notExists(indexDir)) return new ArrayList<>();
			
					// Prepare a weight for each of the fields we want to search on.
					Map<String, Float> fieldWeights = Map.of(
							"name", 3f,
							"municipality", 2f,
							"ident", 2f,
							"type", 1f,
							"continent", 0.25f
					);
			
					// Build a boolean query made up of "boosted" wildcard term queries, that'll match any term.
					BooleanQuery.Builder queryBuilder = new BooleanQuery.Builder();
					// strip() avoids an empty leading term, which would turn into a match-everything "*" query.
					String[] terms = rawQuery.strip().toLowerCase().split("\\s+");
					for (String term : terms) {
						// Make the term into a wildcard term, where we match any field value starting with the given text.
						// For example, "airp*" will match "airport" and "airplane", but not "airshow".
						// This is usually the natural way in which people like to search.
						String wildcardTerm = term + "*";
						for (var entry : fieldWeights.entrySet()) {
							String fieldName = entry.getKey();
							float weight = entry.getValue();
							Query baseQuery = new WildcardQuery(new Term(fieldName, wildcardTerm));
							queryBuilder.add(new BoostQuery(baseQuery, weight), BooleanClause.Occur.SHOULD);
						}
					}
					Query query = queryBuilder.build();
			
					// Use the query we built to fetch up to 10 results.
					try (var reader = DirectoryReader.open(FSDirectory.open(indexDir))) {
						IndexSearcher searcher = new IndexSearcher(reader);
						List<String> results = new ArrayList<>(10);
						// Results come back sorted by relevance score, highest first.
						TopDocs topDocs = searcher.search(query, 10);
						for (ScoreDoc scoreDoc : topDocs.scoreDocs) {
							Document doc = searcher.storedFields().document(scoreDoc.doc);
							results.add(doc.get("name"));
						}
						return results;
					} catch (IOException e) {
						System.err.println("Failed to search index.");
						e.printStackTrace();
						return new ArrayList<>();
					}
				}
			
  1. We check that the user's query is legitimate. If it's null or blank, or if the index doesn't exist on disk yet, we exit right away and return an empty result.
  2. Since we want to make some fields have a greater effect than others, we prepare a mapping that specifies a weight for each field.
  3. In Lucene, the Query object is passed to an index searcher to do the searching. But first, we need to build such a query. In our case, we want to match each term the user enters against any of the fields we've added a weight for. By using a BooleanQuery, we can construct this as a big OR clause, where each term is a wildcard query that's boosted by the weight of the field it applies to.
  4. Finally, we open up a DirectoryReader on the index directory, create an IndexSearcher, and get our results. The searcher produces a TopDocs object whose scoreDocs property contains the ids of the matching documents. We can use the searcher to look up the stored fields of each document in the result set; in this case, we just fetch the name of the airport.
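Putting it all together, a hypothetical entry point could look like this (the CSV file name and the parser sketched earlier are just examples):

				public static void main(String[] args) throws IOException {
					List<Airport> airports = AirportParser.parse(Path.of("airports.csv"));
					buildIndex(airports);
					// Print the names of the top matches for a sample query.
					searchAirports("eindhoven").forEach(System.out::println);
				}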

That's it! In my sample project, the whole Lucene implementation for indexing and searching, including imports and comments, is less than 150 lines of pure Java! It's so simple that it can just be tucked away into a single class.

Now, with your newfound knowledge, go forth and build advanced search features into your apps, and be content that you've built your solution from the ground up, without reinventing the wheel or getting roped into a complex cloud solution.

Once again, my sample code is available on GitHub here.
