Creating a Searchable Site – Part II: 7 Tips for Developers

Following on from last week’s post, here’s our second instalment of ‘How to Create a Searchable Site’. Maintaining and improving a website is a team effort, so this week we’re sharing our top 7 tips for developers, focusing on simple actions you can take to improve site searchability.

When it comes to creating a highly searchable site, information architecture (IA), site structure and content indexing are just as important as the content itself. While content creators focus on what content to create and how to present it, developers need to ensure that users can actually find and access it. Searchability is paramount; in fact, 55% of users will abandon their visit if they can’t find information quickly.

In digital, there’s always room for improvement – so if your next priority is creating a highly searchable site, here are our top tips on where to start (remember to also check out Creating a Searchable Site: Part I for the full picture).

1. Keep a tidy website
Search crawlers work by following links, so every page you want found needs a single, short, human-readable URL that links can point to. While dynamically generated content is a great personalisation tool, it can quickly clutter indexes with rubbish and make your most useful content harder to find – so don’t rely too heavily on dynamically generated pages. If it makes sense to have links in JavaScript, Flash, PDF or Word documents, make sure they’re all listed again in a simple HTML site map, as in the sketch below.
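
For example, a minimal HTML site map is just a page of plain links that any crawler can follow (the page names and paths here are illustrative):

    <!-- sitemap.html: plain HTML links, one per page worth finding. -->
    <ul>
      <li><a href="/about/">About us</a></li>
      <li><a href="/services/">Services</a></li>
      <li><a href="/reports/annual-report-2023.pdf">Annual report (PDF)</a></li>
    </ul>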

2. Hide the rubbish
As part of keeping your site content in order, make sure that content that isn’t useful stays out of sight for searchers. The easiest way to do this is to prevent those pages from being indexed, using a robots metadata directive such as <meta name="robots" content="follow,noindex"/>. Non-useful pages typically include those that work well in a browsing context but not as search results, such as A-Z listing pages and mid- or low-level index pages (page elements like headers, footers and navigation are covered in tip 6 below).
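
Placed in a page’s <head>, the directive tells the crawler to follow the page’s links but leave the page itself out of the index:

    <head>
      <title>A-Z listing</title>
      <!-- Follow this page's links, but don't index the page itself. -->
      <meta name="robots" content="follow,noindex"/>
    </head>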

3. Keep your status codes in check
Status codes dictate whether a search crawler considers a page valid. If your status codes are wrong (for example, an error page that returns 200 OK while displaying a ‘Broken/Not Found’ message), broken pages can end up in the index and in front of searchers. To avoid this, make sure your web server only serves appropriate status codes and that broken URLs return a 404 Not Found status code.
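
As an illustration, most web servers let you serve a friendly error page without changing the status code. In Apache httpd (assumed here purely as an example; other servers have equivalents), that’s a single ErrorDocument directive:

    # Serve a custom error page while still returning a 404 status code.
    ErrorDocument 404 /errors/not-found.html

Pointing ErrorDocument at a local path like this preserves the 404 status; pointing it at a full external URL would make Apache issue a redirect instead, which is exactly the problem to avoid.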

4. Easy on the frames
While framesets tell the browser how a page’s layout should appear, they can make life very difficult for search engines trying to index pages properly. Funnelback indexes frames and their component pages separately, which means a component page can appear in search results without the context of its frameset. To prevent frames from hindering your site’s searchability, avoid excessive use of <frameset>s and <frame>s.
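
To see why, consider a typical frameset (file names illustrative): menu.html and content.html are indexed as separate documents, so content.html can surface in results stripped of its surrounding navigation.

    <frameset cols="25%,75%">
      <frame src="menu.html">
      <frame src="content.html">
    </frameset>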

5. Remember the robots
Another way to make life easier for search crawlers is to use robots.txt files to prevent them from accessing unsuitable material, such as mirror sites and directories of non-textual data. Excess material increases disk space usage and slows down crawling, indexing and query processing, so keeping your crawled content honed and clutter-free will have a big impact on the quality of your search results.
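
A robots.txt file sits at the root of the site and lists the paths crawlers should skip (the paths below are illustrative):

    # Keep crawlers out of the mirror site and raw media files.
    User-agent: *
    Disallow: /mirror/
    Disallow: /media/raw/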

6. Not all page content is equal
If site navigational text, such as headers and footers, appears in your search result summaries instead of the main page content, it makes for a poor search experience (see Controlling indexable content in PADRE for details). Adding directives to your pages to indicate which sections should and shouldn’t be indexed will stop this from happening and ensure searchers only see the most relevant page content – a sketch follows below. Where pages can’t be modified at the source, try using a NoIndexFilterInjector. (Note: always ensure that anchor text is indexed as part of the target document, so that ranking quality is not affected.)
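
In Funnelback these directives are typically written as HTML comments around the sections to exclude; treat the exact comment syntax below as an assumption and confirm it against the PADRE documentation referenced above:

    <!--noindex-->
    <nav>Home | About | Services | Contact</nav>
    <!--endnoindex-->
    <main>
      <h1>Annual report</h1>
      <p>The main content searchers should actually see.</p>
    </main>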

7. Gather content from various sources
Funnelback allows you to directly gather and index API-generated content from a range of data sources. The content is usually returned in a structured format such as JSON or XML via a REST-style web call. Custom gathering can be implemented in the Groovy programming language, with support from a number of Funnelback-specific libraries.
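
As a minimal sketch of the gathering step, in plain Groovy and without the Funnelback-specific libraries (the endpoint URL and field names are hypothetical):

    import groovy.json.JsonSlurper

    // Fetch JSON from a REST endpoint and walk the returned records.
    def endpoint = 'https://api.example.com/articles'  // hypothetical URL
    def records = new JsonSlurper().parse(new URL(endpoint))

    records.each { record ->
        // A real gatherer would store each record for indexing,
        // rather than printing it.
        println "${record.title} -> ${record.url}"
    }

In a real implementation, each record would be written out as a document in Funnelback’s content store so the indexer can pick it up.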

Searchability arguably has the biggest impact on the success of your organisation’s website. When properly designed and executed, a good search experience can decrease bounce rates by an average of 95% and increase engagement rates by as much as 300%.

Funnelback provides some of the most powerful and innovative search capabilities on the market, including the ability to search across multiple data sources and advanced algorithms that deliver the most relevant content, every time.

For more information on how you can create a highly searchable website, download our latest eBook, Increase conversions and cut-through with site search.
