What is Federated Search in SharePoint 2013?

We usually use the term federated search when we need to surface content that lives outside the SharePoint environment and is not crawled by our SharePoint farm.
For example, federation can provide search results from a web-search provider such as Bing, or from a private data set that you do not have permission to crawl.
Another use of federation:
Federation can also be a good solution for a geographically distributed organization that wants to provide search access to content at its various locations when each location has its own search index. Because each location provides search results from its own index, it is not necessary to deploy a centralized search service that builds and accesses a single, unified index.

 In this context, federation can provide advantages such as the following:
  • Low bandwidth requirements ─ An organization that is geographically dispersed might not have the high network bandwidth that is required to crawl and index large amounts of remote content. When an organization uses federation, the main data that is transmitted for search across the wide-area network is only a set of search results from each federated content repository.
  • Freshness of search results ─ Each division within an organization can crawl the local content more quickly than a centralized search deployment would be able to crawl all of the content in the entire organization.
  • Divisional search variability ─ When an organization uses federation, each division within the organization can provide and control its own search environment. Each division can tailor search to its own requirements and preferences, with its own user experience and its own search connectors, for example. A centralized search portal would not allow for such differences.
  • Limited size of search indexes ─ A large, geographically distributed organization might have millions of documents. It might not be practical for the organization to have a single, unified search index because of the infrastructure that would be required to support such a large index. Federation enables users in each division to perform a single search to find relevant content that is distributed across multiple smaller search indexes in the organization.


Comparing Federated Search to Content Crawling in Enterprise Search

To help you decide whether to crawl a repository's content directly or by using federated search, you should consider the differences between the two approaches. You must determine which is most appropriate based on the content repository, and your requirements for the search results you want to return. There are advantages to both approaches.



Advantages of crawling content with SharePoint Enterprise Search
By querying the Search service application's content index for search results, you can do the following:
·         Sort results by relevance.
·         Control how frequently the content index is updated.
·         Specify what metadata is crawled.
·         Perform a single backup operation for crawled content.
Advantages of federating content with SharePoint Enterprise Search
By using federated search to return search results:
·         There are no additional capacity requirements for the content index, because the content is not crawled by SharePoint Enterprise Search.
·         You can take advantage of a repository’s existing search engine. For example, you can federate to an Internet search engine to search the Web.
·         You can optimize the content repository's search engine for the repository's specific set of content, which might provide better search performance on the content set.
·         You can access repositories that are secured against crawls, but which can be accessed by search queries.


You select one of these protocols to get federated search results from the corresponding kind of provider:
  • Remote SharePoint: the index of a search service in another SharePoint farm.
  • OpenSearch 1.0/1.1: an external search engine or feed that uses the OpenSearch protocol, such as Bing.
  • Exchange: Exchange Server 2013.



Code Sample: Federated Search SQL Server Connector

Adding Federated Search (i.e. Bing Search) into SharePoint Search
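As a rough illustration, the following PowerShell sketch shows how a federated (OpenSearch) result source pointing at Bing could be created at the Search service application level. The result source name, the Bing RSS query URL, and the "OpenSearch Provider" lookup key are assumptions; verify the object-model calls in your own environment before relying on them.

Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue;
#Get the Search service application and its federation manager
$ssa = Get-SPEnterpriseSearchServiceApplication;
$fedManager = New-Object Microsoft.Office.Server.Search.Administration.Query.FederationManager($ssa);
#Create a new result source owned at the Search service application level
$owner = New-Object Microsoft.Office.Server.Search.Administration.SearchObjectOwner([Microsoft.Office.Server.Search.Administration.SearchObjectLevel]::Ssa);
$source = $fedManager.CreateSource($owner);
$source.Name = "Bing Results";   #assumed display name
#"OpenSearch Provider" is the typical provider name; confirm it with $fedManager.ListProviders()
$source.ProviderId = $fedManager.ListProviders()["OpenSearch Provider"].Id;
$source.ConnectionUrlTemplate = "http://www.bing.com/search?q={searchTerms}&format=rss";
$source.CreateQueryTransform("{searchTerms}");
$source.Commit();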



Create a Content Database at the Site Collection Level using PowerShell



A content database is a SQL Server database that stores the content of one or more site collections for a SharePoint web application. The content can be pages, files, documents, images, and much more, so if a site collection contains a large number of SharePoint sites, the content database grows rapidly.

One SharePoint web application can have more than one content database mapped to it, and the supported limit is 300 content databases per web application. If you want to add more content databases to a web application, go to Central Administration -> Application Management -> Manage Content Databases -> select the web application -> Add a content database.

The recommended maximum content database size is 200 GB for general usage scenarios.
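To keep an eye on these limits, the sketch below lists each content database attached to a web application together with its current site count and approximate size; the web application URL is a placeholder.

Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue;
$webAppUrl = "http://server:1002";   #placeholder web application URL
#List every content database on the web application with its site count and size in GB
Get-SPContentDatabase -WebApplication $webAppUrl |
    Select-Object Name, CurrentSiteCount, @{Name="SizeGB"; Expression={[math]::Round($_.DiskSizeRequired/1GB, 2)}};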


Create a Content Database at the Site Collection Level using a PowerShell Script

#this line is to reference Microsoft SharePoint PowerShell namespace
#to be able to use SharePoint commands
Add-PSSnapIn Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue;
#Create command required variables
$siteName = "ABC";
$webAppUrl = "http://server:1002";
#You can select any site collection template code by typing Get-SPWebTemplate
$template = "BLANKINTERNET#0";
 $ownerAlias = "domain\username";
$secondaryOwnerAlias = "domain\username";

#this assumes the default wildcard managed path "sites" exists on the web application
$siteUrl = $webAppUrl + "/sites/"+ $siteName;
$databaseName = $siteName;
$databaseServer = "dbservername";

#Create a new content database for the site collection

New-SPContentDatabase -Name $databaseName -DatabaseServer $databaseServer -WebApplication $webAppUrl;

#Create the new site collection in that content database
New-SPSite -Url $siteUrl -OwnerAlias $ownerAlias -SecondaryOwnerAlias $secondaryOwnerAlias -ContentDatabase $databaseName -Template $template -Name $siteName;



#New-SPSite doesn't create the default SharePoint security groups
#Create the site collection's default groups
$web = Get-SPWeb $siteUrl;
# the i:0#.w prefix identifies a Windows (claims) authenticated account
$web.CreateDefaultAssociatedGroups("i:0#.w|$ownerAlias","i:0#.w|$secondaryOwnerAlias",$siteName);
$web.Update();
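As a quick check after the script finishes, you can reuse the $siteUrl variable from above to confirm that the new site collection landed in the intended content database:

#Confirm which content database hosts the new site collection
Get-SPSite $siteUrl | Select-Object Url, @{Name="ContentDatabase"; Expression={$_.ContentDatabase.Name}};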



The same result can also be achieved manually through Central Administration.




What is a crawl and how does it work in SharePoint 2013 (on-premises environment)?

In broad terms, SharePoint Search is comprised of three main functional process components:
  1. Crawling (gathering): collecting content to be processed
  2. Indexing: organizing the processed content into a structured, searchable index
  3. Query processing: retrieving a result set that is relevant to a given user query

Types of Crawl in SharePoint 2013

  1.    Full Crawl
  2.    Incremental Crawl
  3.    Continuous Crawl


Continuous Crawl
  • Advantages: Crawls run in parallel and keep the index as current as possible.
  • Disadvantages: Works only for SharePoint content; it does not work with non-SharePoint content sources.

Incremental / Full Crawl
  • Advantages: Works for both SharePoint and non-SharePoint content.
  • Disadvantages: Crawls run sequentially; the next crawl cannot start until the current one completes.

Full Crawl:
A full crawl processes all content under a content source; the source can include both SharePoint and non-SharePoint content.

Incremental Crawl:
An incremental crawl processes only the content that has been added or modified since the last successful crawl.
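Full and incremental crawls can also be started from PowerShell. The sketch below assumes the default content source name "Local SharePoint sites"; adjust it for your farm.

Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue;
$ssa = Get-SPEnterpriseSearchServiceApplication;
#"Local SharePoint sites" is the default content source name; change it if yours differs
$contentSource = Get-SPEnterpriseSearchCrawlContentSource -SearchApplication $ssa -Identity "Local SharePoint sites";
#Start a full crawl of the content source
$contentSource.StartFullCrawl();
#Or start an incremental crawl instead
#$contentSource.StartIncrementalCrawl();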

Comparison between Full and Incremental Crawl


  1. Compared with incremental crawls, full crawls consume more memory and CPU cycles on the index server.
  2. Full crawls consume more memory and CPU cycles on the web front-end servers when crawling content in your farm.
  3. Full crawls use more network bandwidth than incremental crawls.

There are some scenarios in which an incremental crawl is not sufficient and you need to run a full crawl.

Why do we need Full Crawl?

  1. Software updates or service packs were installed on servers in the farm.
  2. A search administrator added a new managed property.
  3. Crawl rules have been added, deleted, or modified.
  4. A full crawl is required to repair a corrupted index. In this case, the system may attempt a full crawl automatically, depending on the severity of the corruption.
  5. A full crawl of the site has never been done.
  6. You need to detect security changes that were made on file shares after the last full crawl of the file share.
  7. Incremental crawls are failing consecutively. In rare cases, if an incremental crawl fails one hundred consecutive times at any level in a repository, the index server removes the affected content from the index.
  8. You need to reindex ASPX pages on Windows SharePoint Services 3.0 or Office SharePoint Server 2007 sites. The crawler cannot detect when ASPX pages on these sites have changed, so incremental crawls do not reindex views or home pages when individual list items are deleted.


Full Crawl
  • Crawls all items in the content source.
  • Can be scheduled.
  • Can be stopped and paused.
  • Run it when required, for example after changing the content access account, adding new managed properties, changing Content Enrichment web service code, or adding a new IFilter.

Incremental Crawl
  • Crawls only the content modified since the last crawl.
  • Can be scheduled.
  • Can be stopped and paused.
  • Run it when required to pick up recently changed content.

Continuous Crawl
  • Keeps the index as current as possible.
  • Cannot be scheduled.
  • Cannot be stopped or paused; once started, a continuous crawl can only be disabled.
  • Use it when content changes frequently (multiple instances can run in parallel).
  • Works only for SharePoint content sources.
  • Typical scenario: an e-commerce site in cross-site publishing mode.

Note: You should not pause content source crawls very often, or pause multiple content source crawls, because every paused crawl consumes memory on the index server.
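A continuous crawl is enabled on a SharePoint content source rather than scheduled. A minimal sketch, again assuming the default "Local SharePoint sites" content source:

Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue;
$ssa = Get-SPEnterpriseSearchServiceApplication;
#Enable continuous crawls on the content source (pass $false to disable them again)
Set-SPEnterpriseSearchCrawlContentSource -SearchApplication $ssa -Identity "Local SharePoint sites" -EnableContinuousCrawls:$true;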



Incremental Crawl Cycle




Continuous Crawl Cycle





Filter Based on Managed Metadata Using the Left Navigation of a List or Library

Scenario: A user has a list or library with hundreds of records, and taxonomy terms are used to tag each record. The user needs to filter the records by tag within the same list or library. There are already more than 20 unique tags, and the number may keep growing.

The best possible solution, I feel, is metadata navigation.
Step-by-step configuration with an example:
Step 1:
Create a term set in the Managed Metadata service:




Step 2:
Create a list and apply the metadata to each record:




Step 3:
Go to List Settings -> Metadata Navigation Settings -> Configure Key Filters



Click OK and return to your list or library.
Step 4:
You will now see the metadata field in the left navigation, which filters the records based on the applied tags.
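The same key-filter configuration can also be scripted with the server object model's MetadataNavigationSettings class. The site URL, list name, and column name below are placeholders, and the types are assumed to live in the Microsoft.Office.DocumentManagement assembly, so treat this as a sketch to verify in your own farm.

Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue;
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.Office.DocumentManagement") | Out-Null;
$web = Get-SPWeb "http://server:1002/sites/ABC";   #placeholder site URL
$list = $web.Lists["MyTaggedList"];                #placeholder list name
$field = $list.Fields["Tags"];                     #placeholder managed metadata column
#Read the current metadata navigation settings, add the column as a key filter, and save
$settings = [Microsoft.Office.DocumentManagement.MetadataNavigation.MetadataNavigationSettings]::GetMetadataNavigationSettings($list);
$keyFilter = New-Object Microsoft.Office.DocumentManagement.MetadataNavigation.MetadataNavigationKeyFilter($field);
$settings.AddConfiguredKeyFilter($keyFilter);
[Microsoft.Office.DocumentManagement.MetadataNavigation.MetadataNavigationSettings]::SetMetadataNavigationSettings($list, $settings, $true);
$web.Dispose();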



Happy Coding!!


What is Unobtrusive JavaScript ?

Unobtrusive JavaScript is the best practice of writing JavaScript or jQuery in a decoupled, separated format.

When we write JavaScript or jQuery, we usually follow the practice of mixing presentation and behavior in the same line, for example by placing an onclick attribute directly on the HTML element itself, which is not a best practice.




Unobtrusive means decoupling the presentation of an element from its behavior, which helps keep the code clean and maintainable; the entire behavior section is moved out into a separate JS file, for example by attaching the event handler there instead of in the markup.


What is Less.js ?

Less is an amazing little tool that follows a reusable, dynamic approach.
Less is a CSS pre-processor, meaning that it extends the CSS language, adding features that allow variables, mixins, functions and many other techniques that allow you to make CSS that is more maintainable, themable and extendable.

Less.js has features such as:
1.  Variables
2.  Operations
3.  Mixins

We can download Less.js from http://lesscss.org/#download-options and add it as a reference.


Declare Variables:-


Declare Operations:

Operations work just like in any programming language:
@boxHeight: 200px;
@boxWidth: @boxHeight * 2;

Declare Mixins:-

Mixins allow you to embed all the properties of a class into another class by simply including the class name as one of its properties.

.RoundBorders {
  border-radius: 5px;
  -moz-border-radius: 5px;
  -webkit-border-radius: 5px;
}
Make it accept arguments:
.RoundBorders (@radius) {
  border-radius: @radius;
  -moz-border-radius: @radius;
  -webkit-border-radius: @radius;
}
header {
  .RoundBorders(4px);
}

.button {
  .RoundBorders(6px);
}

What is CDN Fallback ?


CDN stands for Content Delivery Network.

The benefit of loading libraries like jQuery and Bootstrap from a CDN is that the CDN has servers in multiple geographic locations. The browser loads the files from the nearest server, which ultimately speeds up the page response time. Example:






There are two popular CDNs in the market:
1. Google
2. Microsoft

Google CDN:- https://developers.google.com/speed/libraries/


<script src="//ajax.googleapis.com/ajax/libs/jquery/1.8.1/jquery.min.js"></script>

Microsoft CDN:- http://www.asp.net/ajax/cdn
<script src="http://ajax.aspnetcdn.com/ajax/jquery/jquery-1.9.0.js"></script>
Fallback means: if the CDN server fails, then what?

In that scenario, the page should load the file from your local server instead, i.e.:

<script src="http://ajax.aspnetcdn.com/ajax/jquery/jquery-2.0.0.min.js"></script>
<script>
if (typeof jQuery == 'undefined') {
    document.write(unescape("%3Cscript src='/js/jquery-2.0.0.min.js' type='text/javascript'%3E%3C/script%3E"));
}
</script>

Or (another way; both are equivalent):

<script src="//ajax.aspnetcdn.com/ajax/jquery/jquery-2.0.0.min.js"></script>
<script>window.jQuery || document.write('<script src="js/jquery-2.0.0.min.js">\x3C/script>')</script>






What is JavaScript, jQuery and the DOM?

In a very short and simplified manner:

JavaScript is a language.

jQuery is a library (a reusable framework) built on top of JavaScript.

Basically, jQuery works with the 3 S's, i.e. Start ($), Selector (“.” or “#”) and Shot (the action, i.e. click, val, etc.).


The DOM (Document Object Model) is a representation of the HTML document, or of a piece of the web page, made up of the HTML content on the page with tags like <div>, <p>, <span>, <ul>, <li>, etc.
The DOM is a standard object model that exposes HTML element properties, methods, and events.
The DOM is the in-memory representation of the HTML.
High Level structure of DOM:






   

SharePoint 2013: How to add people search results into a different vertical

The Everything vertical does not show people results.
The reason is that the system uses a different result source for each vertical (search navigation link). We can, however, surface results from more than one result source under a single vertical.
- The Everything vertical uses the default result source Local SharePoint Results.
- The People vertical uses the default result source Local People Results.
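If you want to confirm which result sources exist in your Search service application (and their exact names), a rough sketch using the search federation object model is shown below; the ListSources call and its parameters are assumptions to verify against your environment.

$ssa = Get-SPEnterpriseSearchServiceApplication;
$fedManager = New-Object Microsoft.Office.Server.Search.Administration.Query.FederationManager($ssa);
$owner = New-Object Microsoft.Office.Server.Search.Administration.SearchObjectOwner([Microsoft.Office.Server.Search.Administration.SearchObjectLevel]::Ssa);
$filter = New-Object Microsoft.Office.Server.Search.Administration.SearchObjectFilter($owner);
#List every result source visible at the Search service application level
$fedManager.ListSources($filter, $true) | Select-Object Name, Id;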

Using a query rule, we can add people results into the Everything vertical.

Steps are as follows:-

Step 1:

Go to the Search Center -> Site Settings -> Site Collection Administration -> Search Query Rules:

Step 2:
Select Local SharePoint Results (System) in the Context drop-down:



Step 3:
Once that is selected, click New Query Rule:


Step 4:
On the Add Query Rule page, add a rule name, remove the condition, and then click the Add Result Block link:





On the Add Result Block page, change the Title to "People Results". In the Select this Source drop-down, select Local People Results (System) and choose the number of items. The default is 2, and that's probably not enough (just like beer or martinis); 3 or 5 seem like good numbers.



Under the Settings section, as shown above, select the "More" link option and enter "peopleresults.aspx?k={subjectTerms}". Select People Item from the Item Display Template. Click OK.



Back on the Add Query Rule page, click Save.

Run your search again (it may take a moment for the changes to appear):





Happy Coding !!