Using contextual hints to improve Solr's autocomplete suggester
Posted by Kelvin on 03 Mar 2012 | Tagged as: Lucene / Solr / Elasticsearch / Nutch
Context-less multi-term autocomplete is difficult.
Given the term "di", we can look at our index and rank terms starting with "di" by frequency and return the n most frequent terms. Solr's TSTLookup and FSTLookup do this very well.
However, given the term "walt di", we can no longer do what we did above for each term and not look silly, especially if the corpus in question is a list of US companies (hint: think Mickey Mouse). There's little excuse for suggesting "walt discovery" or "walt diners" when our corpus does not contain any documents with that combination of terms.
In the absence of a large number of historical user queries to augment the autocomplete, context is king when it comes to multi-term queries.
The simplest way I can think of doing this, if it is feasible to do so memory-wise, is to store a list of terms and the term that immediately follows it. For example, given the field value "international business machines", mappings would be created for
international=>business
business=>machines
Out-of-order queries wouldn't be supported with this system, nor would term skips (e.g. international machines).
Here's a method fragment that does just this:
HashMultimap<String, String> map = HashMultimap.create();
for (int i = 0; i < reader.numDocs(); ++i) {
  Fieldable fieldable = reader.document(i).getFieldable(field);
  if (fieldable == null) continue;
  String fieldVal = fieldable.stringValue();
  if (fieldVal == null) continue;

  // re-analyze the stored field value to get its tokens in order
  TokenStream ts = a.tokenStream(field, new StringReader(fieldVal));
  String prev = null;
  while (ts.incrementToken()) {
    CharTermAttribute attr = ts.getAttribute(CharTermAttribute.class);
    String v = new String(attr.buffer(), 0, attr.length()).intern();
    // map each term to the term that immediately follows it
    if (prev != null) {
      map.put(prev, v);
    }
    prev = v;
  }
}
Guava's Multimap is perfect for this, and Solr already has a Guava dependency, so we might as well make full use of it.
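To sketch how the map might then be used at suggest time (this part is my own illustration, not tied to Solr's suggester API; the class and method names are made up): take the last completed term of the query as the context key, and filter the set of terms that follow it by the prefix currently being typed.

import com.google.common.collect.HashMultimap;
import java.util.ArrayList;
import java.util.List;

public class ContextualSuggester {
  // For a query like "walt di", pass context="walt" and prefix="di".
  public static List<String> suggest(HashMultimap<String, String> map,
                                     String context, String prefix, int n) {
    List<String> suggestions = new ArrayList<String>();
    for (String candidate : map.get(context)) {
      // only keep following-terms that start with what's being typed
      if (candidate.startsWith(prefix)) {
        suggestions.add(candidate);
        if (suggestions.size() >= n) break;
      }
    }
    return suggestions;
  }
}

Ranking within that filtered set (by frequency, for example) is left out of the sketch, but the same frequency data the single-term suggesters use would slot in naturally.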
Solr autocomplete with document suggestions
Posted by Kelvin on 03 Mar 2012 | Tagged as: Lucene / Solr / Elasticsearch / Nutch
Solr 3.5 comes with a nice autocomplete/typeahead component that is based on the SpellCheckComponent.
You provide it a query and a field, and the Suggester returns a list of suggestions based on the query. For example:
<?xml version="1.0" encoding="UTF-8"?>
<response>
  <lst name="spellcheck">
    <lst name="suggestions">
      <lst name="ac">
        <int name="numFound">2</int>
        <int name="startOffset">0</int>
        <int name="endOffset">2</int>
        <arr name="suggestion">
          <str>acquire</str>
          <str>accommodate</str>
        </arr>
      </lst>
      <str name="collation">acquire</str>
    </lst>
  </lst>
</response>
Nice.
Now what if, as part of the autocomplete request, you needed a list of documents that contain the suggested terms for the given field? That's what I'm about to cover here.
TermDocs is your friend
The basic idea here is to call reader.termDocs() for each term, collect the document ids, and use that as the basis of a DocSlice. Here are the relevant bits of code.
First, AND the doc ids for the various suggestions into a single docset:
NamedList spellcheck = (NamedList) rb.rsp.getValues().get("spellcheck");
NamedList suggestions = (NamedList) spellcheck.get("suggestions");
final SolrIndexReader reader = rb.req.getSearcher().getReader();
OpenBitSet docset = null;
for (int i = 0; i < suggestions.size(); ++i) {
  String name = suggestions.getName(i);
  if ("collation".equals(name)) continue;
  NamedList query = (NamedList) suggestions.getVal(i);
  Set<String> suggestion = (Set<String>) query.get("suggestion");
  // collect the doc ids for this suggestion's terms
  OpenBitSet docs = collectDocs(field, reader, suggestion);
  if (docset == null) {
    docset = docs;
  } else {
    docset.and(docs);
  }
}
collectDocs is implemented as follows:
private OpenBitSet collectDocs(String field, SolrIndexReader reader, Set<String> terms) throws IOException {
  OpenBitSet docset = new OpenBitSet();
  TermDocs te = reader.termDocs();
  for (String s : terms) {
    Term t = new Term(field, s);
    te.seek(t);
    // mark every document containing this term
    while (te.next()) {
      docset.set(te.doc());
    }
  }
  te.close();
  return docset;
}
Now with the OpenBitSet of document ids matching the suggested terms, you can return a list of documents.
One problem is that you don't have document scores since no search was actually performed. Ideally, you'd want to return the documents sorted by some field, and use the field value as the score.
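As a rough sketch of that last step (my own illustration, continuing from the fragments above and reusing the docset and reader variables; the "title" field is just a placeholder), you can walk the set bits and load the stored documents directly:

// Walk the OpenBitSet and load each matching stored document,
// capping the result at 10 documents for the response.
List<Document> docs = new ArrayList<Document>();
for (int docId = docset.nextSetBit(0); docId >= 0 && docs.size() < 10;
     docId = docset.nextSetBit(docId + 1)) {
  docs.add(reader.document(docId));
}
// e.g. print a stored field for each hit
for (Document doc : docs) {
  System.out.println(doc.get("title"));
}

Sorting would mean loading (or using a field cache on) the sort field for each doc id before trimming to the top n.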
Book review of Apache Solr 3 Enterprise Search Server
Posted by Kelvin on 28 Feb 2012 | Tagged as: programming, Lucene / Solr / Elasticsearch / Nutch
Apache Solr 3 Enterprise Search Server, published by Packt Publishing, is the only Solr book available at the moment.
It's a fairly comprehensive book, and discusses many new Solr 3 features. Considering the breakneck pace of Solr development and the rate at which new features get introduced, you have to hand it to the authors for having released a book which isn't outdated by the time it hits bookshelves.
Nonetheless, it does have shortcomings. I'll cover some of these shortly.
Firstly, the table of contents:
Chapter 1: Quick Starting Solr
Chapter 2: Schema and Text Analysis
Chapter 3: Indexing Data
Chapter 4: Searching
Chapter 5: Search Relevancy
Chapter 6: Faceting
Chapter 7: Search Components
Chapter 8: Deployment
Chapter 9: Integrating Solr
Chapter 10: Scaling Solr
Appendix: Search Quick Reference
A complete TOC with chapter sections is available here: http://www.packtpub.com/apache-solr-3-enterprise-search-server/book
The good points
The book does an overall excellent job of covering Solr basics such as the Lucene query syntax, scoring, schema.xml, DIH (DataImportHandler), faceting and the various search components.
There are chapters dedicated to deploying, integrating and scaling Solr, which is nice. I found the Scaling Solr chapter in particular to be filled with common performance-enhancement tips.
The DisMax query parser is covered in great detail, which is good because I've often found it to be a stumbling block for new Solr users.
The bad points
Not many, but here are a few gripes.
The two most important files a new Solr user needs to understand are schema.xml and solrconfig.xml. More emphasis should have been placed on them early on. I don't even see solrconfig.xml anywhere in the TOC.
No mention of the Solr admin interface, which is an absolute gem for a number of tasks, such as understanding tokenizers. In the text analysis section of Chapter 2, there really should be a walkthrough of Solr Admin's analyzer interface.
I think there could have been at least an attempt at describing the underlying data structure in which documents are stored (the inverted index), as well as a basic introduction to the tf.idf scoring model. There is no mention of this at all in Chapter 5, Search Relevancy. One could argue that this is out of scope for the book, but if a reader is to arrive at a deep understanding of what Lucene really is, understanding inverted indices and tf.idf is clearly a must.
Summary
All in all, Apache Solr 3 Enterprise Search Server is a book I'd heartily recommend to new or even moderately experienced users of Apache Solr.
It brings together information which is spread throughout the Lucene and Solr wiki and javadocs, making it a handy desk reference.
Apache Solr book review coming soon..
Posted by Kelvin on 27 Feb 2012 | Tagged as: Lucene / Solr / Elasticsearch / Nutch
Just received my review copy of the only Apache Solr book on the market..
http://www.packtpub.com/apache-solr-3-enterprise-search-server/book
My book review to follow shortly..
What's new in Solr 3.4.0
Posted by Kelvin on 06 Oct 2011 | Tagged as: Lucene / Solr / Elasticsearch / Nutch
If you are already using Apache Solr 3.1, 3.2 or 3.3, it's strongly recommended that you upgrade to 3.4.0 because of an index corruption bug that could strike when the OS or computer crashed or lost power (LUCENE-3418), now fixed in 3.4.0.
Solr 3.4.0 release highlights include:
- Bug fixes and improvements from Apache Lucene 3.4.0, including a major bug (LUCENE-3418) whereby a Lucene index could easily become corrupted if the OS or computer crashed or lost power.
- SolrJ client can now parse grouped and range facets results (SOLR-2523).
- A new XsltUpdateRequestHandler allows posting XML that's transformed by a provided XSLT into a valid Solr document (SOLR-2630).
- Post-group faceting option (group.truncate) can now compute facet counts for only the highest-ranking documents per group (SOLR-2665).
- Add commitWithin update request parameter to all update handlers that were previously missing it. This tells Solr to commit the change within the specified amount of time (SOLR-2540). A minimal SolrJ sketch follows this list.
- You can now specify NIOFSDirectory (SOLR-2670).
- New parameter hl.phraseLimit speeds up FastVectorHighlighter (LUCENE-3234).
- The query cache and filter cache can now be disabled per request. See this wiki page (SOLR-2429).
- Improved memory usage, build time, and performance of SynonymFilterFactory (LUCENE-3233).
- Added omitPositions to the schema, so you can omit position information while still indexing term frequencies (LUCENE-2048).
- Various fixes for multi-threaded DataImportHandler.
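For the commitWithin item above, here's a minimal SolrJ sketch (my own illustration, not from the release notes; the URL and field names are placeholders):

import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.client.solrj.request.UpdateRequest;
import org.apache.solr.common.SolrInputDocument;

public class CommitWithinExample {
  public static void main(String[] args) throws Exception {
    // Placeholder URL; point this at your own Solr instance.
    SolrServer server = new CommonsHttpSolrServer("http://localhost:8983/solr");

    SolrInputDocument doc = new SolrInputDocument();
    doc.addField("id", "1");
    doc.addField("name", "example");

    UpdateRequest req = new UpdateRequest();
    req.add(doc);
    // Ask Solr to commit this change within 5 seconds.
    req.setCommitWithin(5000);
    req.process(server);
  }
}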
See the release notes for a more complete list of all the new features, improvements, and bugfixes.
As usual, the download is available here: http://www.apache.org/dyn/closer.cgi/lucene/solr/
Introducing SolrTutorial.com
Posted by Kelvin on 02 Oct 2011 | Tagged as: Lucene / Solr / Elasticsearch / Nutch
Just launched a Solr tutorial website styled after my LuceneTutorial.com but tailored towards Solr users.
It also includes high-level overviews to Solr for non-programmers, such as Solr for Managers and Solr for SysAdmins.
HOWTO: Collect WebDriver HTTP Request and Response Headers
Posted by Kelvin on 22 Jun 2011 | Tagged as: Lucene / Solr / Elasticsearch / Nutch, crawling, programming
WebDriver is a fantastic Java API for web application testing. It has recently been merged into the Selenium project to provide a friendlier API for the programmatic simulation of web browser actions. Its unique property is that it executes web pages in real browsers such as Firefox, Chrome and IE, and then gives programmatic access to the resulting DOM model.
The problem with WebDriver, though, as reported here, is that because the underlying browser implementation does the actual fetching (as opposed to, say, Commons HttpClient), it's currently not possible to obtain the HTTP request and response headers, which is kind of a PITA.
I present here a method of obtaining HTTP request and response headers via an embedded proxy, derived from the Proxoid project.
ProxyLight from Proxoid
ProxyLight is the lightweight standalone proxy from the Proxoid project. It's released under the Apache Public License.
The original code only provided request filtering, and performed no response filtering, forwarding data directly from the web server to the requesting client.
I made some modifications to intercept and parse HTTP response headers.
Get my version here (released under APL): http://downloads.supermind.org/proxylight-20110622.zip
Using ProxyLight from WebDriver
The modified ProxyLight allows you to process both request and response.
This has the added benefit of allowing you to write a RequestFilter which ignores images, or URLs from certain domains. Sweet!
What your WebDriver code has to do, then, is:
- Ensure the ProxyLight server is started
- Add Request and Response Filters to the ProxyLight server
- Maintain a cache of intercepted requests and responses which you can then retrieve
- Ensure the native browser uses our ProxyLight server
Here's a sample class to get you started
package org.supermind.webdriver;

import com.mba.proxylight.ProxyLight;
import com.mba.proxylight.Response;
import com.mba.proxylight.ResponseFilter;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.firefox.FirefoxProfile;

import java.util.LinkedHashMap;
import java.util.Map;

public class SampleWebDriver {
  protected int localProxyPort = 5368;
  protected ProxyLight proxy;

  // LRU response table. Note: this is not thread-safe.
  // Use ConcurrentLinkedHashMap instead: http://code.google.com/p/concurrentlinkedhashmap/
  private LinkedHashMap<String, Response> responseTable = new LinkedHashMap<String, Response>() {
    protected boolean removeEldestEntry(Map.Entry eldest) {
      return size() > 100;
    }
  };

  public Response fetch(String url) {
    if (proxy == null) {
      initProxy();
    }

    FirefoxProfile profile = new FirefoxProfile();
    // Get the native browser to use our proxy
    profile.setPreference("network.proxy.type", 1);
    profile.setPreference("network.proxy.http", "localhost");
    profile.setPreference("network.proxy.http_port", localProxyPort);
    FirefoxDriver driver = new FirefoxDriver(profile);

    // Now fetch the URL
    driver.get(url);

    Response proxyResponse = responseTable.remove(driver.getCurrentUrl());
    return proxyResponse;
  }

  private void initProxy() {
    proxy = new ProxyLight();
    this.proxy.setPort(localProxyPort);

    // this response filter adds the intercepted response to the cache
    this.proxy.getResponseFilters().add(new ResponseFilter() {
      public void filter(Response response) {
        responseTable.put(response.getRequest().getUrl(), response);
      }
    });

    // add request filters here if needed

    // now start the proxy
    try {
      this.proxy.start();
    } catch (Exception e) {
      e.printStackTrace();
    }
  }

  public static void main(String[] args) {
    SampleWebDriver driver = new SampleWebDriver();
    Response res = driver.fetch("http://www.lucenetutorial.com");
    System.out.println(res.getHeaders());
  }
}
Solr 3.2 released!
Posted by Kelvin on 22 Jun 2011 | Tagged as: programming, Lucene / Solr / Elasticsearch / Nutch, crawling
I'm a little slow off the block here, but I just wanted to mention that Solr 3.2 has been released!
Get your download here: http://www.apache.org/dyn/closer.cgi/lucene/solr
Solr 3.2 release highlights include:
- Ability to specify overwrite and commitWithin as request parameters when using the JSON update format
- TermQParserPlugin, useful when generating filter queries from terms returned from field faceting or the terms component (see the example after this list)
- DebugComponent now supports using a NamedList to model Explanation objects in its responses instead of Explanation.toString
- Improvements to the UIMA and Carrot2 integrations
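Here's the TermQParserPlugin example referenced above, as a small SolrJ fragment (my own illustration; the manu field and value come from Solr's example schema and are just placeholders). The point is that a raw facet value can be turned into a filter query without any query-syntax escaping:

SolrQuery q = new SolrQuery("ipod");
// {!term f=manu} treats everything after the closing brace as a literal
// term value, so spaces and punctuation in the facet value need no escaping.
q.addFilterQuery("{!term f=manu}Belkin, Inc.");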
I had personally been looking forward to the addition of the overwrite request param to the JSON update format, so I'm delighted about this release.
Great work guys!
Recap: The Fallacies of Distributed Computing
Posted by Kelvin on 01 Mar 2011 | Tagged as: programming, Lucene / Solr / Elasticsearch / Nutch, crawling
Just so no-one forgets, here's a recap of the Fallacies of Distributed Computing:
1. The network is reliable.
2. Latency is zero.
3. Bandwidth is infinite.
4. The network is secure.
5. Topology doesn’t change.
6. There is one administrator.
7. Transport cost is zero.
8. The network is homogeneous.
Solandra – Solr running on Cassandra
Posted by Kelvin on 21 Oct 2010 | Tagged as: Lucene / Solr / Elasticsearch / Nutch
Courtesy of Nick Lothian..
http://nicklothian.com/blog/2009/10/27/solr-cassandra-solandra/