What is a crawl and how does it work in SharePoint 2013 – On-Premises Environment

In broad terms, SharePoint Search is comprised of three main functional process components:
  1. Crawling (Gathering): collecting the content to be processed
  2. Indexing: organizing the processed content into a structured, searchable index
  3. Query Processing: retrieving a relevant result set for a given user query

Types of Crawl in SharePoint 2013

  1. Full Crawl
  2. Incremental Crawl
  3. Continuous Crawl


Continuous Crawl
  Advantages: Runs in parallel and keeps the index as current as possible.
  Disadvantages: Works only for SharePoint content; it does not work with non-SharePoint content.

Incremental / Full Crawl
  Advantages: Works for both SharePoint and non-SharePoint content.
  Disadvantages: Runs in sequential mode; a second crawl cycle cannot start until the first one has completed.

Full Crawl:
A full crawl crawls the entire content under a content source; the content source can include both SharePoint and non-SharePoint content.

Incremental Crawl:
An incremental crawl crawls only the content that has been added or modified after the last successful crawl.
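
A crawl can also be kicked off from PowerShell instead of the Search Administration UI. A minimal sketch, assuming the Search Service Application is named "Search Service Application" and the content source is named "Local SharePoint sites" (adjust both to your farm):

$ssa = Get-SPEnterpriseSearchServiceApplication -Identity "Search Service Application"
$cs = Get-SPEnterpriseSearchCrawlContentSource -SearchApplication $ssa -Identity "Local SharePoint sites"

# Start a full crawl of everything under the content source
$cs.StartFullCrawl()

# Or start an incremental crawl of only the content changed since the last successful crawl
# $cs.StartIncrementalCrawl()

# Check the current crawl state of the content source
$cs.CrawlStatus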

Comparison between Full and Incremental Crawl


  1. Compared with incremental crawls, full crawls consume more memory and CPU cycles on the index server.
  2. Full crawls consume more memory and CPU cycles on the Web Front End servers when crawling content in your farm.
  3. Full crawls use more network bandwidth than incremental crawls.

There are some scenarios where an incremental crawl is not sufficient and you need to run a full crawl.

Why do we need Full Crawl?

  1. Software updates or service packs have been installed on servers in the farm.
  2. A search administrator has added a new managed property.
  3. Crawl rules have been added, deleted, or modified.
  4. A full crawl is required to repair a corrupted index. In this case, the system may attempt a full crawl on its own (depending on the severity of the corruption).
  5. A full crawl of the site has never been done.
  6. To detect security changes that were made on file shares after the last full crawl of the file share.
  7. An incremental crawl is failing consecutively. In rare cases, if an incremental crawl fails one hundred consecutive times at any level in a repository, the index server removes the affected content from the index.
  8. To re-index ASPX pages on Windows SharePoint Services 3.0 or Office SharePoint Server 2007 sites. The crawler cannot discover when ASPX pages on those sites have changed, so incremental crawls do not re-index views or home pages when individual list items are deleted.


Full Crawl
  - Crawls all items.
  - Can be scheduled.
  - Can be stopped and paused.
  - When required: the content access account has been changed, new managed properties have been added, content enrichment web service code has been changed or modified, or a new IFilter has been added.

Incremental Crawl
  - Crawls only the content modified since the last crawl.
  - Can be scheduled.
  - Can be stopped and paused.
  - When required: only the content changed since the last crawl needs to be picked up.

Continuous Crawl
  - Keeps the index as current as possible.
  - Cannot be scheduled.
  - Cannot be stopped or paused (once started, a continuous crawl can only be disabled).
  - When required: content changes frequently (multiple crawl instances can run in parallel).
  - Works only for SharePoint content sources.
  - Typical scenario: an e-commerce site in cross-site publishing mode.


Note: You should not pause content source crawls very often, or pause multiple content source crawls at once, as every paused crawl consumes memory on the index server.
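
As noted in the comparison above, a continuous crawl is not scheduled; it is switched on or off per content source. A minimal PowerShell sketch (the service application and content source names are assumptions; adjust to your farm):

$ssa = Get-SPEnterpriseSearchServiceApplication -Identity "Search Service Application"

# Enable continuous crawls for a SharePoint content source
Set-SPEnterpriseSearchCrawlContentSource -Identity "Local SharePoint sites" -SearchApplication $ssa -EnableContinuousCrawls $true

# Disable continuous crawls again (the only way to "stop" a continuous crawl)
Set-SPEnterpriseSearchCrawlContentSource -Identity "Local SharePoint sites" -SearchApplication $ssa -EnableContinuousCrawls $false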



Incremental Crawl Cycle




Continuous Crawl Cycle





Filter based on Managed Metadata using the Left Navigation of a List or Library

Scenario: A user has a list or library containing hundreds of records, with a taxonomy (managed metadata) term applied to each record for tagging. The user now needs to filter the records by tag within the same list or library. There are already more than 20 unique tags, and the number may keep growing.

The best possible solution, I feel:
Step-by-step creation of the structure and configuration, with an example:
Step 1:
Create a term set in the Managed Metadata service:




Step 2:
Create a list and apply the metadata to each record:




Step 3:
Go to List Settings -> Metadata Navigation Settings -> Configure Key Filters.



Click OK and return to your list or library.
Step 4:
You will now see the metadata field in the left navigation, which filters the records based on the applied tags.



Happy Coding!!


What is Unobtrusive JavaScript?

Unobtrusive JavaScript is the best practice of writing JavaScript or jQuery in a decoupled, separated form.

When we write JavaScript or jQuery, we often follow the practice of mixing presentation and behavior on the same element (for example, an inline onclick attribute written directly on an HTML tag), which is not a best practice, e.g.




Unobtrusive means decoupling, or separating, the presentation and the behavior of an object, which helps keep the code clean and maintainable; the entire behavior section is moved out into a separate JS file, e.g.


What is Less.js?

Less is an amazing little tool that follows a reusable, dynamic approach.
Less is a CSS pre-processor, meaning that it extends the CSS language, adding features such as variables, mixins and functions, and many other techniques that make your CSS more maintainable, themable and extendable.

Less.js has features like:
1. Variables
2. Operations
3. Mixins

We can download Less.js from http://lesscss.org/#download-options and add it as a reference.


Declare Variables:
Less variables are declared with an @ prefix and can be reused throughout the stylesheet (for example, @boxHeight below).


Declare Operations:

Operations work just like in any programming language:
@boxHeight: 200px;
@boxWidth: @boxHeight * 2;

Declare Mixins:

Mixins allow you to embed all the properties of a class into another class by simply including the class name as one of its properties.

.RoundBorders {
  border-radius: 5px;
  -moz-border-radius: 5px;
  -webkit-border-radius: 5px;
}
Make it accept arguments (a parametric mixin):
.RoundBorders (@radius) {
  border-radius: @radius;
  -moz-border-radius: @radius;
  -webkit-border-radius: @radius;
}
header {
  .RoundBorders(4px);
}

.button {
  .RoundBorders(6px);
}

What is CDN Fallback?


CDN stands for Content Delivery Network.

The benefit of loading libraries like jQuery and Bootstrap from a CDN is that the CDN's servers are located in multiple geographical locations, and the browser loads the files from the nearest server, which ultimately speeds up the page response time. Example:






There are two popular CDNs in the market:
1. Google
2. Microsoft

Google CDN:- https://developers.google.com/speed/libraries/


<script src="//ajax.googleapis.com/ajax/libs/jquery/1.8.1/jquery.min.js"></script>

Microsoft CDN:- http://www.asp.net/ajax/cdn
<script src="http://ajax.aspnetcdn.com/ajax/jquery/jquery-1.9.0.js"></script>
Fallback means: what if the CDN server fails?

In that scenario, the page should load the file from your local server instead, i.e.

<script src="http://ajax.aspnetcdn.com/ajax/jquery/jquery-2.0.0.min.js"></script>
<script>
if (typeof jQuery == 'undefined') {
    document.write(unescape("%3Cscript src='/js/jquery-2.0.0.min.js' type='text/javascript'%3E%3C/script%3E"));
}
</script>

Or (another way; both are equivalent):

<script src="//ajax.aspnetcdn.com/ajax/jquery/jquery-2.0.0.min.js"></script>
<script>window.jQuery || document.write('<script src="js/jquery-2.0.0.min.js">\x3C/script>')</script>






What is JavaScript, jQuery and the DOM?

In a very short and simplified manner:

JavaScript is a language.

jQuery is a library (a reusable framework) built on top of JavaScript.

Basically, jQuery works with the 3 S's: Start ($), Selector ("." or "#") and Shot (the action, e.g. click or val).


DOM (Document Object Model): a representation of the HTML document, or of any piece of HTML content in the web page, made up of tags like <div>, <p>, <span>, <ul>, <li>, etc.
The DOM is a standard object model consisting of HTML element properties, HTML element methods and HTML element events.
The DOM is an in-memory representation of the HTML.
High-level structure of the DOM:






   

SharePoint 2013: How to add people search results to a different vertical

The Everything vertical does not show people results by default.
The reason is that the system uses a different result source for each vertical (search navigation link); however, we can surface data from more than one result source under a single vertical.
- The Everything vertical's default result source is Local SharePoint Results.
- The People vertical's default result source is Local People Results.

Using a query rule, we can add people results to the Everything vertical.

The steps are as follows:

Step 1:

Go to Search Center -> Site Settings -> Site Collection Administration -> Search Query Rules:

Step 2:
Select Local SharePoint Results (System) in the Context drop-down:



Step 3:
Once that is selected, click on New Query Rule:


Step 4:
On the Add Query Rule page, add a rule name, remove the condition, and then click on the Add Result Block link:





On the Add Result Block page, change the Title to "People Results". In the Select this Source drop-down, select Local People Results (System) and choose the number of items. The default is 2 and that's probably not enough (just like beer or martinis); 3 or 5 seem like good numbers.



Under the Settings section, as shown above, select the "More" link option and enter "peopleresults.aspx?k={subjectTerms}". Select People Item from the Item Display Template drop-down. Click OK.



Back on the Add Query Rule page, click Save.

Run your search again (it may take a moment for the changes to appear):
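
If you prefer to verify from a script instead of the browser, the search REST API can be queried directly. This is only a hedged sketch: the site URL and the query term are assumptions, and since the query runs against the default result source, the people result block added by the query rule should show up in the response:

# Query the SharePoint 2013 search REST endpoint as the current Windows user
$queryUrl = "http://test.sp2013.corp/_api/search/query?querytext='john'"
Invoke-RestMethod -Uri $queryUrl -UseDefaultCredentials -Headers @{ Accept = "application/json;odata=verbose" }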





Happy Coding !!

SharePoint 2013 Site Collection deletion and restore options


In this post I want to review the PowerShell commands available for managing site collections: deleting a site collection, restoring a deleted one and removing a deleted one, viz. Remove-SPSite, Restore-SPDeletedSite and Remove-SPDeletedSite. I will also review the differences between deleting a site collection from Central Administration and deleting it with PowerShell, with and without the -GradualDelete option.



Deleting a Site Collection from Central Admin:




Deleting a site collection from Central Administration does not permanently delete it; the site collection is only marked as deleted in the SiteMap table in the configuration database.


Deleting a site collection from Central Administration is the same as deleting the site with the PowerShell command using the -GradualDelete option.
Example:
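
A minimal sketch of the equivalent PowerShell (the site URL is the one used later in this post; adjust it to your environment):

# Delete the site collection gradually; it is only marked as deleted, not removed outright
Remove-SPSite -Identity "http://test.sp2013.corp/Sites/SiteABC" -GradualDelete -Confirm:$false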


If you run the following SQL query against the configuration database, you will find the deletion transaction ID for the deleted site. (Please note that, since this is a lab exercise, I took the liberty of executing a query against a SharePoint database; querying SharePoint databases directly is not supported by Microsoft.)






This deleted site collection only gets deleted permanently when the 'Gradual Site Delete' timer job for that specific web application runs as per its schedule.



Before the timer job runs, the deleted site collection can be found in the list of deleted site collections using the Get-SPDeletedSite command.
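
For example (a hedged sketch; the properties selected below are the common ones on the deleted-site objects):

# List the site collections currently marked as deleted
Get-SPDeletedSite | Select-Object SiteId, Path, DeletionTime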





If the deleted site still exists in the deleted sites list, it can be restored successfully using Restore-SPDeletedSite.
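
For example (a hedged sketch; the server-relative path is assumed from the site URL used in this post, and a single match is assumed):

# Find the deleted site collection by path and restore it
$deleted = Get-SPDeletedSite | Where-Object { $_.Path -eq "/Sites/SiteABC" }
Restore-SPDeletedSite -Identity $deleted.SiteId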





After the restore, the site collection can be seen back in the same place where it was before the deletion.





Remove-SPDeletedSite

Let's redo the deletion with the -GradualDelete option and then permanently remove the site from the SharePoint farm (from the SiteMap table).
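
A hedged sketch of those two steps, using the same assumed URL and path as above:

# Step 1: gradual delete again, so the site lands in the deleted site collections list
Remove-SPSite -Identity "http://test.sp2013.corp/Sites/SiteABC" -GradualDelete -Confirm:$false

# Step 2: permanently remove it from the deleted site collections list (and the site map)
$deleted = Get-SPDeletedSite | Where-Object { $_.Path -eq "/Sites/SiteABC" }
Remove-SPDeletedSite -Identity $deleted.SiteId -Confirm:$false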




Verify in the SiteMap table in the config database:

There is no longer any record of the site.


Using Remove-SPSite without the -GradualDelete option (deleting the site permanently).


If a site is deleted using PowerShell with Remove-SPSite without the -GradualDelete option, the site gets deleted permanently from the farm configuration and cannot be restored.


Remove-SPSite -Identity "http://test.sp2013.corp/Sites/SiteABC"



“Gradual Site Delete” Timer-Job


Even though the site may have been deleted permanently from the SharePoint farm (as confirmed by the information in the SiteMap table) and can no longer be restored, the content database only gets the deleted site's space back after the 'Gradual Site Delete' timer job has run.
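
If you do not want to wait for the schedule, the timer job can be started on demand. A hedged sketch (the web application URL and the display-name match are assumptions; verify the exact job name in your farm):

# Find the Gradual Site Delete timer job for the web application and run it now
$wa = Get-SPWebApplication "http://test.sp2013.corp"
$job = Get-SPTimerJob -WebApplication $wa | Where-Object { $_.DisplayName -like "*Gradual Site Delete*" }
Start-SPTimerJob -Identity $job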


Ref: http://blogs.technet.com/b/wbaer/archive/2010/08/02/gradual-site-delete-in-sharepoint-2010.aspx