Sunday, 14 July 2019

Serialization in Java

Serialization is the process of writing the state of an object to a byte stream. This is useful when you want to save the state of your program to a persistent storage area such as a file. At a later time, you may restore these objects using the process of deserialization.

Only objects whose classes implement the Serializable interface can be saved and restored. Fields marked as transient are skipped during serialization.

Recommended learning.

I recommend going through the difference between the transient and volatile keywords.

Externalizable.

By default, serialization happens automatically. When the user wants to have control over this process, the Externalizable interface comes into the picture.
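To make the idea concrete, here is a minimal sketch of Externalizable (the class, field names, and values are my own for illustration): the class itself decides what gets written and read back, instead of relying on the default field-by-field serialization.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.Externalizable;
import java.io.IOException;
import java.io.ObjectInput;
import java.io.ObjectInputStream;
import java.io.ObjectOutput;
import java.io.ObjectOutputStream;

public class ExternalizableDemo implements Externalizable {
    String name;
    int count;

    // A public no-arg constructor is mandatory for Externalizable
    public ExternalizableDemo() { }

    public ExternalizableDemo(String name, int count) {
        this.name = name;
        this.count = count;
    }

    @Override
    public void writeExternal(ObjectOutput out) throws IOException {
        out.writeUTF(name);   // we control the exact wire format
        out.writeInt(count);
    }

    @Override
    public void readExternal(ObjectInput in) throws IOException {
        name = in.readUTF();  // must read back in the same order we wrote
        count = in.readInt();
    }

    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(new ExternalizableDemo("demo", 7));
        }
        try (ObjectInputStream ois = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()))) {
            ExternalizableDemo copy = (ExternalizableDemo) ois.readObject();
            System.out.println(copy.name + ":" + copy.count); // demo:7
        }
    }
}
```

Note that during deserialization the runtime first calls the public no-arg constructor and then readExternal, which is a key difference from plain Serializable.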

Throughout this process, inStream is the byte stream from which the object is read, and outStream is the byte stream to which the object is written.

To achieve serialization, the following classes and interfaces are important.

ObjectOutput.
This interface defines the methods used to serialize an object.

ObjectOutputStream.
This is the class responsible for writing objects into a stream.

ObjectInput.
This interface defines the methods used to deserialize an object.

ObjectInputStream.
This class is responsible for reading objects from a stream.

Using all of these concepts, I have written a small program that illustrates serialization.

In this program, I created SerializationExample as a VO class that implements the Serializable interface.

In MyClass, I write a List<SerializationExample> object to a file and read it back.

We can discuss more complex implementations in upcoming posts.

package com.searchendeca.sample;

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;

public class SerializationExample implements Serializable {
    transient String s;
    int i;

    public SerializationExample(String s1, int i1) {
        this.s = s1;
        this.i = i1;
    }

    public String toString() {
        return "s=" + s + "; i=" + i;
    }
}

class MyClass {

    public static void main(String args[]) {
        // Serialize the object
        try {
            List<SerializationExample> listObj = new ArrayList<>();
            ObjectOutputStream outObj = new ObjectOutputStream(new FileOutputStream("C:\\Users\\Syed\\Sample\\src\\com\\searchendeca\\thread\\sample\\serial.txt"));
            SerializationExample se = new SerializationExample("Syed Ghouse", 1);
            SerializationExample se1 = new SerializationExample("Syed", 2);
            listObj.add(se);
            listObj.add(se1);
            outObj.writeObject(listObj);
            outObj.close(); // flush and close, otherwise the file may be incomplete
        } catch (IOException ie) {
            ie.printStackTrace();
        }

        // Deserialize the object
        ObjectInputStream inObj = null;
        List<SerializationExample> dse = null;
        try {
            inObj = new ObjectInputStream(new FileInputStream("C:\\Users\\Syed\\Sample\\src\\com\\searchendeca\\thread\\sample\\serial.txt"));
            dse = (List<SerializationExample>) inObj.readObject();
            inObj.close();
        } catch (ClassNotFoundException | IOException e) {
            e.printStackTrace();
        }
        dse.stream().forEachOrdered(System.out::println);
    }
}

Output:

s=null; i=1
s=null; i=2

Saturday, 13 July 2019

Thread Concepts

To find whether a thread is alive or not, we can use the method below.

final boolean isAlive()
To determine whether a thread has finished, you can call isAlive() on the thread. However, the method more commonly used to wait for a thread to finish is join().

final void join() throws InterruptedException
This makes the calling thread wait until the target thread finishes its process.

notify()
Wakes up a single thread that called wait() on the same object.

notifyAll()
Wakes up all the threads that called wait() on the same object.

What is a Race Condition in a Thread?

A race condition occurs when two or more threads can access shared data and they try to change it at the same time. Because the thread scheduling algorithm can swap between threads at any time, you don't know the order in which the threads will attempt to access the shared data. Therefore, the result of the change in data is dependent on the thread scheduling algorithm, i.e. both threads are "racing" to access/change the data.

Below is an example of a race condition between threads.

package com.searchendeca.thread.sample;

class MultiThreadNonSyncDemo {

    // For sync, use the synchronized keyword here
    void call(String pName) {
        System.out.print("[" + pName);
        try {
            Thread.sleep(200);
        } catch (InterruptedException e) {
            System.out.println("Interrupted");
            e.printStackTrace();
        }
        System.out.println("]");
    }
}
class Caller implements Runnable {
    String msg;
    MultiThreadNonSyncDemo target;
    Thread t;

    public Caller(MultiThreadNonSyncDemo targ, String s) {
        target = targ;
        msg = s;
        if (s.equals("Hello3")) {
            t = new Thread(this);
            t.start();
            t.setPriority(1);
        } else {
            t = new Thread(this);
            t.start();
        }
    }

    public void run() {
        target.call(msg);
    }
}
class MainMultiSyncJoin {

    public static void main(String args[]) {
        MultiThreadNonSyncDemo target = new MultiThreadNonSyncDemo();
        Caller m1 = new Caller(target, "Hello1");
        Caller m2 = new Caller(target, "Hello2");
        Caller m3 = new Caller(target, "Hello3");
        try {
            m1.t.join();
            m2.t.join();
            m3.t.join();
            System.out.println("**One is Alive::" + m1.t.isAlive());
            System.out.println("**Two is Alive:" + m2.t.isAlive());
            System.out.println("**Three is Alive:" + m3.t.isAlive());
        } catch (InterruptedException IE) {
            IE.printStackTrace();
        }
    }
}

Output:

[Hello1[Hello2[Hello3]
]
]
**One is Alive::false
**Two is Alive:false
**Three is Alive:false


Here all three threads start and execute at once, racing among themselves, which is why the bracketed output is interleaved.

synchronized: This comes into play as a solution for race conditions.

To avoid the race condition, we can use the two approaches below.

1. Using synchronized methods.
2. Using synchronized statements.

1. Using synchronized methods.
// The synchronized keyword is added here
synchronized void call(String pName) {
    System.out.print("[" + pName);
    try {
        Thread.sleep(200);
    } catch (InterruptedException e) {
        System.out.println("Interrupted");
        e.printStackTrace();
    }
    System.out.println("]");
}

Update the above program with this snippet.

Output:

[Hello1]
[Hello3]
[Hello2]
**One is Alive::false
**Two is Alive:false
**Three is Alive:false

2. Using synchronized statements.
In the demo program defined above, alter the run method like this:

public void run() {
    synchronized (target) {
        target.call(msg);
    }
}

Output:

[Hello1]
[Hello3]
[Hello2]
**One is Alive::false
**Two is Alive:false
**Three is Alive:false

Thread Priorities

A thread priority is used to decide when to switch from one running thread to the next. This is called a context switch.

There are certain rules that determine when a context switch can take place:

1. A thread can voluntarily relinquish control.
2. A thread can be preempted by a higher-priority thread.

Setting a thread's priority does not mean it executes at a higher speed than a low-priority thread; it means threads are given CPU time based on their priority.

Thread priorities are also used by the thread scheduler to decide when each thread should be allowed to run. In theory, over a given period of time, higher-priority threads get more CPU time than lower-priority threads.

Priority can range from 1 to 10, that is, from MIN_PRIORITY to MAX_PRIORITY. To return a thread to its default priority, specify NORM_PRIORITY, which is currently 5.
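A minimal sketch of these constants in use (the demo class and messages are my own; note that priorities are only hints to the scheduler, so correctness must never depend on them):

```java
public class PriorityDemo {
    public static void main(String[] args) {
        Thread low = new Thread(() -> System.out.println("low done"));
        Thread high = new Thread(() -> System.out.println("high done"));
        low.setPriority(Thread.MIN_PRIORITY);     // 1
        high.setPriority(Thread.MAX_PRIORITY);    // 10
        System.out.println(Thread.NORM_PRIORITY); // the default, 5
        low.start();
        high.start();
    }
}
```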

Thread Creation in Java

A thread can be defined as the smallest unit of execution within a process (an instance of a program running on a computer).

We will see how to create a thread and the rules around it in coming blogs.
A thread can be created using the two approaches below.

1. Implementing the Runnable interface.
2. Extending the Thread class.

When we say there are two ways, the next question asked is: which is the best approach?
Many developers, including me, think that a class should be extended only when it is being enhanced or modified in some way.
So if you will not override any of Thread's other methods, it is probably best simply to implement Runnable.

1. Implementing the Runnable interface.

package com.searchendeca.thread.sample;

class ThreadDemoImplements implements Runnable {

    Thread t;

    ThreadDemoImplements() {
        t = new Thread(this, "My Thread");
        t.start();
    }

    @Override
    public void run() {
        for (int n = 5; n > 0; n--) {
            System.out.println("*********" + n);
            try {
                Thread.sleep(500);
            } catch (InterruptedException e) {
                System.out.println("Child thread interrupted");
            }
        }
    }
}
class Calling {

    public static void main(String args[]) {
        new ThreadDemoImplements();
        try {
            for (int i = 5; i > 0; i--) {
                System.out.println("Main Thread: " + i);
                Thread.sleep(1000);
            }
        } catch (InterruptedException e) {
            System.out.println("Main thread interrupted.");
        }
        System.out.println("Main thread exiting.");
    }
}

2. Extending the Thread class.

 package com.searchendeca.thread.sample;

class ThreadDemoExtends extends Thread {

    ThreadDemoExtends() {
        super("Demo Thread");
        start();
    }

    public void run() {
        for (int n = 5; n > 0; n--) {
            System.out.println("*********" + n);
            try {
                Thread.sleep(500);
            } catch (InterruptedException e) {
                System.out.println("Child thread interrupted");
            }
        }
    }
}
class CallingExtends {

    public static void main(String args[]) {
        new ThreadDemoExtends();
        try {
            for (int i = 5; i > 0; i--) {
                System.out.println("Main Thread: " + i);
                Thread.sleep(1000);
            }
        } catch (InterruptedException e) {
            System.out.println("Main thread interrupted.");
        }
        System.out.println("Main thread exiting.");
    }
}

The practical difference between implementing Runnable and extending Thread is that when extending Thread there is no need to create a separate Thread instance; the class itself is the thread.
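Since Java 8 there is also a more concise variant of the Runnable approach: Runnable is a functional interface, so the body can be supplied as a lambda. A minimal sketch (class name and messages are my own):

```java
public class LambdaThreadDemo {
    public static void main(String[] args) throws InterruptedException {
        // Runnable is a functional interface, so a lambda replaces the class
        Thread t = new Thread(() -> System.out.println("worker running"));
        t.start();
        t.join(); // wait for the worker to finish
        System.out.println("main exiting, worker alive=" + t.isAlive());
    }
}
```

This avoids a dedicated class entirely when the run logic is short.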

Producer Consumer Problem

Hi readers, let's get into the Producer-Consumer problem, also known as the bounded-buffer problem. Without proper coordination, the producer keeps producing and the consumer keeps polling for values, wasting CPU cycles.

The Producer-Consumer problem is the classic example of this, and we will see it through the example below.

Consider the four classes below:

// GetSet, the queue that you're trying to synchronize
class GetSet {
    int n;
    boolean valueSet = false;

    synchronized int get() {
        System.out.println("Got: " + n);
        return n;
    }

    synchronized void put(int n) {
        this.n = n;
        System.out.println("Put: " + n);
    }
}

// Producer, the threaded object that is producing queue entries
class Producer implements Runnable {
    GetSet q;

    Producer(GetSet q) {
        this.q = q;
        new Thread(this, "Producer").start();
    }

    public void run() {
        int i = 0;
        while (true) {
            q.put(i++);
        }
    }
}

// Consumer, the threaded object that is consuming queue entries
class Consumer implements Runnable {
    GetSet q;

    Consumer(GetSet q) {
        this.q = q;
        new Thread(this, "Consumer").start();
    }

    public void run() {
        while (true) {
            q.get();
        }
    }
}

// PC, the tiny class that creates the single GetSet, Producer, and Consumer
class PC {
    public static void main(String args[]) {
        GetSet q = new GetSet();
        new Producer(q);
        new Consumer(q);
        System.out.println("Press Control-C to stop.");
    }
}

Press Control-C to stop.
Put: 0
Put: 1
Put: 2
Put: 3
Put: 4
Got: 4
Got: 4
Got: 4
Got: 4
Got: 4
Got: 4
Got: 4
Got: 4
Got: 4
Got: 4
Got: 4
Got: 4
Got: 4
Got: 4
Got: 4
Got: 4
Got: 4
Got: 4


As you can see, the producer put 0 through 4 without letting the consumer have a chance to consume them, and then the consumer repeatedly got the same value 4. Nothing stops the producer from overrunning the consumer, or the consumer from reading the same value twice.

To avoid this polling, Java includes an elegant inter-thread communication mechanism via the wait(), notify(), and notifyAll() methods.

Please see the following snippet

class GetSet {
    int n;
    boolean visited = false;

    synchronized int get() {
        if (!visited) {
            try {
                wait(); // wait until the producer puts a value
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
        System.out.println("Got: " + n);
        notify();           // let the producer put the next value
        visited = false;
        return n;
    }

    synchronized void put(int n) {
        if (visited) {
            try {
                wait(); // wait until the consumer takes the value
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
        this.n = n;
        visited = true;
        notify();           // let the consumer take this value
        System.out.println("Put: " + n);
    }
}



Here, the producer is suspended until the consumer has taken the value, and the consumer is suspended until the producer has put a new one, so no CPU cycles are wasted in polling. Some output from this program shows the clean synchronous behavior:

Press Control-C to stop.
Put: 0
Got: 0
Put: 1
Got: 1

Hope the Producer-Consumer problem is solved. Happy Reading!!!

Sunday, 30 December 2018

Indexing Process in Solr

The following post describes how exactly the indexing process in Solr works.

When it comes to indexing, there are multiple ways we can achieve it in Solr, such as:

  • Indexing using the post.jar
  • Indexing using the dataImport handlers.
  • Indexing by executing the curl commands.

We will concentrate on the first two ways of indexing.

Indexing using the post.jar

As I already mentioned in previous posts, Solr ships with the exampledocs folder, which is useful for getting started.

Navigate to C:\Dev\solr-7.5.0\example\exampledocs

In this folder we have sample XML and JSON files which we can use to index data. In the same folder we also have post.jar, which processes these documents and indexes them.

C:\Dev\solr-7.5.0\bin>java -jar -Dc=example -Dauto C:\Dev\solr-7.5.0\example\exampledocs\post.jar C:\MicroservicesPOC\solr-7.5.0\solr-7.5.0\example\exampledocs\ .*

Where -Dc is the name of the core.

-Dauto enables automatic mode, where the tool detects each file's content type from its extension.

post.jar reads the files and indexes the documents given to it. The condition here is that we have to follow the format post.jar expects; otherwise the indexing will not happen.

Indexing using the dataImport handlers

For the second way of indexing, check out my detailed post here on using the DataImport handler.

Happy Indexing!!!

Tuesday, 25 December 2018

My First Lambda Expression using Java 8

I was late trying out my first lambda expressions!!!!! But still, I will give it a try and sort names by last name. Here is my code.


Step 1: Create a POJO class

FileName:Person.java

package com.mycommercesearch.solr;

public class Person {
    private String firstName;
    private String lastName;
    private int age;

    @Override
    public String toString() {
        return "Person [firstName=" + firstName + ", lastName=" + lastName + ", age=" + age + "]";
    }

    public Person(String firstName, String lastName, int age) {
        super();
        this.firstName = firstName;
        this.lastName = lastName;
        this.age = age;
    }

    public String getFirstName() {
        return firstName;
    }

    public void setFirstName(String firstName) {
        this.firstName = firstName;
    }

    public String getLastName() {
        return lastName;
    }

    public void setLastName(String lastName) {
        this.lastName = lastName;
    }

    public int getAge() {
        return age;
    }

    public void setAge(int age) {
        this.age = age;
    }
}



Step 2: Create the interface NameSorter

FileName:NameSorter.java

package com.mycommercesearch.solr;

import java.util.List;

@FunctionalInterface
public interface NameSorter {
    void soryByLastName(List<Person> person, String arrangement);
}


Step 3: Create a main class

FileName:NameTest.java

package com.mycommercesearch.solr;

import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class NameTest {
    public void invokeSample(NameSorter nameTest, List<Person> person, String arrangement) {
        nameTest.soryByLastName(person, arrangement);
    }

    public static void main(String args[]) {
        // Here I have initialized the Person list with custom values.
        List<Person> myPerson = Arrays.asList(new Person("Syed", "Ghouse", 27),
                new Person("Manoj", "Kumar", 45), new Person("Chetan", "Bagath", 50),
                new Person("Eddapadi", "Palaniswamy", 60), new Person("Paneer", "Selvam", 55));
        // Here I have written the lambda expression to print the list
        NameSorter sortName = (person, arrangeMent) -> {
            if (!person.isEmpty()) {
                person.forEach(ps -> System.out.println("Sorting using the last Name " + arrangeMent + "::" + ps.getFirstName() + " " + ps.getLastName()));
            }
        };
        // Creating the object for my class
        NameTest nameTest = new NameTest();
        // Passing the behaviour to my interface
        nameTest.invokeSample(sortName, myPerson, "Before");
        // Invoking the sort by last name
        nameTest.sortLastName(myPerson);
        // Passing the sorted behaviour back to my previous lambda
        nameTest.invokeSample(sortName, myPerson, "after");
    }

    public void sortLastName(List<Person> pes) {
        Collections.sort(pes, (Person o1, Person o2) -> o1.getLastName().compareTo(o2.getLastName()));
    }
}


Output will be in the following format:


Sorting using the last Name Before::Syed Ghouse
Sorting using the last Name Before::Manoj Kumar
Sorting using the last Name Before::Chetan Bagath
Sorting using the last Name Before::Eddapadi Palaniswamy
Sorting using the last Name Before::Paneer Selvam

Sorting using the last Name after::Chetan Bagath
Sorting using the last Name after::Syed Ghouse
Sorting using the last Name after::Manoj Kumar
Sorting using the last Name after::Eddapadi Palaniswamy
Sorting using the last Name after::Paneer Selvam
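As a side note, since Java 8 the hand-written comparator in sortLastName can be built with Comparator.comparing; with the Person class above that would be Comparator.comparing(Person::getLastName). A self-contained sketch using plain strings (list contents are my own):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class ComparingDemo {
    public static void main(String[] args) {
        List<String> lastNames = new ArrayList<>(
                Arrays.asList("Ghouse", "Kumar", "Bagath"));
        // Comparator.comparing builds the comparator from a key extractor,
        // replacing the explicit compareTo lambda
        lastNames.sort(Comparator.comparing(s -> s));
        System.out.println(lastNames); // [Bagath, Ghouse, Kumar]
    }
}
```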


We can look more at how to use lambda expressions in our daily coding in upcoming posts.

Happy Expressions!!!!

Monday, 24 December 2018

Core Creation in Solr

Before starting anything in Solr, we have to create a core. A core is a running instance of a Lucene index along with all the Solr configuration files it needs. We need to create a core to perform operations like indexing and analyzing. A Solr application may contain one or more cores and can communicate with multiple cores.

It's similar to creating an Endeca app. The core can be created in two ways.


Creating through Command

Navigate to C:\Dev\solr-7.5.0\bin>solr.cmd create -c example

This will create the core in Solr.

WARNING: Using _default configset with data driven schema functionality. NOT RECOMMENDED for production use.
         To turn off: bin\solr config -c example -p 8983 -action set-user-property -property update.autoCreateFields -value false
INFO  - 2018-12-25 12:03:30.613; org.apache.solr.util.configuration.SSLCredentialProviderFactory; Processing SSL Credential Provider chain: env;sysprop

Created a new core 'example'

Creating through Solr Admin UI

Navigate to AdminUI>core Admin>Add Core

Fill the following popup

name: <name of the core>

instanceDir: <directory where the solrconfig.xml is available>. In case solrconfig.xml has not been created, we can use the one from the default configset that ships with Solr.

C:\Dev\solr-7.5.0\server\solr\example

Copy the directory conf from the

C:\Dev\solr-7.5.0\server\solr\configsets\_default

to

C:\Dev\solr-7.5.0\server\solr\example\conf

Give the Instance Directory as C:\Dev\solr-7.5.0\server\solr\example

dataDir: <directory where the index files are stored>

C:\Dev\solr-7.5.0\server\solr\example\data

Leave the remaining config and schema fields as they are.

Here, understanding two folders is important.

conf> where the Solr configurations are stored.

data> where the indexed data files are stored in a non-readable binary format.

Happy Coring !!!!

Understanding Solr and Admin Console

We have seen how to download and install Solr in previous posts; now it's time to understand it further.
After unzipping, observe the folder structure.

Folder Structure

C:\Dev\solr-7.5.0\




bin> This contains the command files from where we start/stop Solr.

contrib> This has add-on plugins for specialized features of Solr.

dist> This has the main Solr .jar files.

docs> This has the link to the online documentation.

example> This has the example docs which can be used for learning and getting-started purposes.

licenses> The licenses directory includes all of the licenses for 3rd party libraries used by Solr.

server> This is the core of Solr; the official documentation calls it the heart of Solr. It contains the following:

server>solr-webapp> -->Solr’s Admin UI

server>lib> -->Jetty libraries

server>logs> -->Log files

server>solr>configsets> --> Sample configsets


Solr Admin UI.

Solr has the default admin UI that can be accessed via the port number 8983

http://127.0.0.1:8983/solr/ or http://localhost:8983/solr/ 





This has the core selector, logging, schema selection, query execution, memory stats and more. For developers coming from Endeca, it is similar to the jspref orange application in Endeca, with more features.

Happy Structuring !!!

Installing Solr

The installation of Solr is very simple. Compared with other search platforms I have worked on, this is the simplest one in terms of installation.

Download the zip file from the official site of Solr. It's always good practice to move it to a development folder before proceeding, instead of keeping it in the Downloads folder.

Prerequisites:

Make sure your Java version is compatible with the version of Solr you download.

Make sure your JAVA_HOME and PATH variables are set.

Follow the below steps for Installation.

1. Unzip the file we downloaded. Usually the file is named in the following format: solr-7.X.X.zip

Once you unzip it, congratulations, you are done with your installation.

We have a walkthrough of the folder structure in a different post.

Starting the Solr.

Normal Mode

Consider my Solr in the following Directory C:\Dev\solr-7.5.0 then

Navigate to C:\Dev\solr-7.5.0\bin open the command prompt in this location

and execute the Following command C:\Dev\solr-7.5.0\bin>solr.cmd start

Solr is started with the following logs on the prompt.

INFO  - 2018-12-25 10:39:32.458; org.apache.solr.util.configuration.SSLCredentialProviderFactory; Processing SSL Credential Provider chain: env;sysprop
Waiting up to 30 seconds to see Solr running on port 8983
Started Solr server on port 8983. Happy searching!

Debug Mode

If you want Solr in debug mode, execute the below command in the same location.

C:\Dev\solr-7.5.0\bin>solr.cmd start -a "-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=18983"

This will start Solr in debug mode, listening on port 18983. If you need a detailed explanation, check my post here.

By default Solr runs on port 8983. If that port is already occupied, either stop the process on that port or start Solr on a different port.

Stopping the Process running on the port

1. Identifying the process running on the port (Windows)

netstat -ano | findstr :8983 will list the processes currently listening on port 8983.

2. Killing the process running on the port (Windows)

taskkill /PID <PID_NO> /F

C:\Dev\solr-7.5.0\bin>netstat -ano | findstr :8983
  TCP    0.0.0.0:8983           0.0.0.0:0              LISTENING       18596
  TCP    [::]:8983              [::]:0                 LISTENING       18596

C:\Dev\solr-7.5.0\bin>taskkill /PID 18596 /F
SUCCESS: The process with PID 18596 has been terminated.

Starting the Solr in a different port.

C:\Dev\solr-7.5.0\bin>solr.cmd start -p 8990

This will start Solr on a different port.


Stopping the solr.


C:\Dev\solr-7.5.0\bin>solr.cmd stop -all

This will stop Solr on all the ports it is running on.

Happy Installation !!!!!

Saturday, 7 July 2018

JAVA SE 6 Features and Enhancements Part-1

Hi All,

You might be thinking why I started writing posts related to Java SE 6. This idea came out of a discussion with friends. Java 9 is about to be released, but are we really using the existing features effectively? The answer is no. That's why I wanted to give a heads-up, so at least we can keep in mind that these are available.

When I first looked into this, I thought it might not be that big, but on reading the release document I found that it's huge. In this post, I am going to quickly run through the changes that happened in the collections framework. While reading the document, I came to know there is so much in the basics I need to concentrate on. I will start writing those simpler basics for you and for my own learning. Here is some information I gathered on the Java 6 collections framework.

Let's try to Implement these in our daily coding.

These new collection interfaces are provided:

1.Deque, a double-ended queue, supporting element insertion and removal at both ends.

Deque can be used in our applications when we want fast access to both ends of the queue; it has methods that let you work from either end without much effort. Unlike the List interface, this interface does not provide support for indexed access to elements. While Deque implementations are not strictly required to prohibit the insertion of null elements, they are strongly encouraged to do so, and users of implementations that do allow nulls are strongly encouraged not to insert them. This is because null is used as a special return value by various methods to indicate that the deque is empty.

When going through this API, you may find the difference between add and offer confusing:

1. They come from different interfaces: add is from Collection, offer is from Queue.
2. add throws an exception if it cannot add the element, whereas offer returns false.
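A small sketch of that add-vs-offer difference; I use a capacity-bounded LinkedBlockingDeque here (an unbounded ArrayDeque would never refuse an element), with my own demo values:

```java
import java.util.concurrent.LinkedBlockingDeque;

public class AddVsOfferDemo {
    public static void main(String[] args) {
        LinkedBlockingDeque<Integer> deque = new LinkedBlockingDeque<>(1); // capacity 1
        System.out.println(deque.offer(1)); // true  - fits
        System.out.println(deque.offer(2)); // false - full, no exception
        try {
            deque.add(2); // add throws when the deque is full
        } catch (IllegalStateException e) {
            System.out.println("add threw IllegalStateException");
        }
    }
}
```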


2.BlockingDeque

A Deque that additionally supports blocking operations that wait for the deque to become non-empty when retrieving an element, and wait for space to become available in the deque when storing an element.

3.NavigableSet<E>

A SortedSet extended with navigation methods reporting closest matches for given search targets. Methods lower, floor, ceiling, and higher return elements respectively less than, less than or equal, greater than or equal, and greater than a given element, returning null if there is no such element. A NavigableSet may be accessed and traversed in either ascending or descending order. The descendingSet method returns a view of the set with the senses of all relational and directional methods inverted. The performance of ascending operations and views is likely to be faster than that of descending ones.
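A short sketch of those navigation methods using TreeSet, the standard NavigableSet implementation (element values are my own):

```java
import java.util.Arrays;
import java.util.NavigableSet;
import java.util.TreeSet;

public class NavigableSetDemo {
    public static void main(String[] args) {
        NavigableSet<Integer> set = new TreeSet<>(Arrays.asList(10, 20, 30));
        System.out.println(set.lower(20));       // 10   (strictly less)
        System.out.println(set.floor(20));       // 20   (less than or equal)
        System.out.println(set.ceiling(21));     // 30   (greater than or equal)
        System.out.println(set.higher(30));      // null (no strictly greater element)
        System.out.println(set.descendingSet()); // [30, 20, 10]
    }
}
```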

4.NavigableMap<K,V>

A SortedMap extended with navigation methods returning the closest matches for given search targets. Methods lowerEntry, floorEntry, ceilingEntry, and higherEntry return Map.Entry objects associated with keys respectively less than, less than or equal, greater than or equal, and greater than a given key, returning null if there is no such key. Similarly, methods lowerKey, floorKey, ceilingKey, and higherKey return only the associated keys. All of these methods are designed for locating, not traversing entries.
A NavigableMap may be accessed and traversed in either ascending or descending key order. The descendingMap method returns a view of the map with the senses of all relational and directional methods inverted. The performance of ascending operations and views is likely to be faster than that of descending ones.

5.ConcurrentMap<K,V>

A Map providing additional atomic putIfAbsent, remove, and replace methods.

Memory consistency effects: As with other concurrent collections, actions in a thread prior to placing an object into a ConcurrentMap as a key or value happen-before actions subsequent to the access or removal of that object from the ConcurrentMap in another thread.
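A quick sketch of those atomic methods on ConcurrentHashMap, the usual ConcurrentMap implementation (keys and values are my own):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class ConcurrentMapDemo {
    public static void main(String[] args) {
        ConcurrentMap<String, Integer> map = new ConcurrentHashMap<>();
        map.putIfAbsent("a", 1);                       // key absent: inserts, returns null
        Integer prev = map.putIfAbsent("a", 2);        // key present: no change, returns 1
        System.out.println(map.get("a") + " " + prev); // 1 1
        map.replace("a", 1, 5);                        // atomic compare-and-replace
        System.out.println(map.get("a"));              // 5
    }
}
```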

The following concrete implementation classes have been added:

1.ArrayDeque<E>

Resizable-array implementation of the Deque interface. Array deques have no capacity restrictions; they grow as necessary to support usage. They are not thread-safe; in the absence of external synchronization, they do not support concurrent access by multiple threads. Null elements are prohibited. This class is likely to be faster than Stack when used as a stack, and faster than LinkedList when used as a queue.

Most ArrayDeque operations run in amortized constant time. Exceptions include remove, removeFirstOccurrence, removeLastOccurrence, contains, iterator.remove(), and the bulk operations, all of which run in linear time.
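A small sketch showing one ArrayDeque used as a stack (LIFO) and another as a queue (FIFO); the values are my own:

```java
import java.util.ArrayDeque;

public class ArrayDequeDemo {
    public static void main(String[] args) {
        ArrayDeque<String> stack = new ArrayDeque<>();
        stack.push("a");
        stack.push("b");
        System.out.println(stack.pop()); // b - last in, first out

        ArrayDeque<String> queue = new ArrayDeque<>();
        queue.offer("a");
        queue.offer("b");
        System.out.println(queue.poll()); // a - first in, first out
    }
}
```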

2.ConcurrentSkipListSet<E>

A scalable concurrent NavigableSet implementation based on a ConcurrentSkipListMap. The elements of the set are kept sorted according to their natural ordering, or by a Comparator provided at set creation time, depending on which constructor is used.

3.ConcurrentSkipListMap<K,V>

A scalable concurrent ConcurrentNavigableMap implementation. The map is sorted according to the natural ordering of its keys, or by a Comparator provided at map creation time, depending on which constructor is used.

4.LinkedBlockingDeque<E>

An optionally-bounded blocking deque based on linked nodes.

The optional capacity bound constructor argument serves as a way to prevent excessive expansion. The capacity, if unspecified, is equal to Integer.MAX_VALUE. Linked nodes are dynamically created upon each insertion unless this would bring the deque above capacity.

5.AbstractMap.SimpleEntry<K,V>

An Entry maintaining a key and a value. The value may be changed using the setValue method. This class facilitates the process of building custom map implementations. For example, it may be convenient to return arrays of SimpleEntry instances in method Map.entrySet().toArray
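A tiny sketch of SimpleEntry (key and value are my own):

```java
import java.util.AbstractMap;
import java.util.Map;

public class SimpleEntryDemo {
    public static void main(String[] args) {
        Map.Entry<String, Integer> e = new AbstractMap.SimpleEntry<>("count", 1);
        e.setValue(2); // SimpleEntry is mutable, unlike SimpleImmutableEntry
        System.out.println(e.getKey() + "=" + e.getValue()); // count=2
    }
}
```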

6.AbstractMap.SimpleImmutableEntry<K,V>

An Entry maintaining an immutable key and value. This class does not support method setValue. This class may be convenient in methods that return thread-safe snapshots of key-value mappings.

These existing classes have been retrofitted to implement new interfaces:

1. LinkedList<E>

Linked list implementation of the List interface. Implements all optional list operations, and permits all elements (including null). In addition to implementing the List interface, the LinkedList class provides uniformly named methods to get, remove and insert an element at the beginning and end of the list. These operations allow linked lists to be used as a stack, queue, or double-ended queue.

The class implements the Deque interface, providing first-in-first-out queue operations for add, poll, along with other stack and deque operations.

2.TreeSet<E>

A NavigableSet implementation based on a TreeMap. The elements are ordered using their natural ordering, or by a Comparator provided at set creation time, depending on which constructor is used.

This implementation provides guaranteed log(n) time cost for the basic operations (add, remove and contains).


3.TreeMap<K,V>

A Red-Black tree based NavigableMap implementation. The map is sorted according to the natural ordering of its keys, or by a Comparator provided at map creation time, depending on which constructor is used.

Two new methods were added to the Collections utility class:

newSetFromMap(Map) - creates a general purpose Set implementation from a general purpose Map implementation.
There is no IdentityHashSet class, but instead, just use

Set<Object> identityHashSet=
    Collections.newSetFromMap(
        new IdentityHashMap<Object, Boolean>());

asLifoQueue(Deque) - returns a view of a Deque as a Last-in-first-out (Lifo) Queue.
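A short sketch of both utility methods together (element values are my own; the newSetFromMap usage mirrors the snippet above):

```java
import java.util.ArrayDeque;
import java.util.Collections;
import java.util.IdentityHashMap;
import java.util.Queue;
import java.util.Set;

public class CollectionsUtilDemo {
    public static void main(String[] args) {
        // A Deque viewed as a LIFO queue: add/remove behave like push/pop
        Queue<String> lifo = Collections.asLifoQueue(new ArrayDeque<String>());
        lifo.add("first");
        lifo.add("second");
        System.out.println(lifo.remove()); // second

        // An identity-based Set built from an IdentityHashMap
        Set<Object> identitySet = Collections.newSetFromMap(
                new IdentityHashMap<Object, Boolean>());
        identitySet.add(new String("x"));
        identitySet.add(new String("x"));       // distinct objects: both kept
        System.out.println(identitySet.size()); // 2
    }
}
```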

Happy Learning !!!!! Keep watching for more basics .

Enabling SEO in Endeca

Hi All,

I wanted to discuss SEO in Endeca, as we all know search engine optimization plays an important role in any e-commerce site. The URLs we navigate should be simple and user-friendly so the site ranks higher in search engines. In Endeca, everything works on guided navigation: we apply filters, and these filters are nothing but dimensions, so it is evident that the URL keeps growing with long dimension IDs. Endeca has an SEO mechanism where we can hide/encode these dimension IDs for better-looking, simpler URLs.

There are two ways you can achieve this:

1. Spring injection.
2. Defining it in the class and properties file.

I am going to discuss the first approach, that is, implementing it using Spring injection.

You can follow the steps below to achieve it. You can also refer to CRS for an implementation; we are going to take it as the base for ours.

1. Copy all the jars from the folder \CommerceReferenceStore\Store\Storefront\j2ee-apps\Storefront\store.war\WEB-INF\lib

2. Next, create the following files.

endeca-seo-url-config.xml

You can find this file in CommerceReferenceStore\Store\Storefront\j2ee-apps\Storefront\store.war\WEB-INF

spring-context.xml

You can find this file in CommerceReferenceStore\Store\Storefront\j2ee-apps\Storefront\store.war\WEB-INF

3. Create a reference to these files from web.xml, as below:

 <context-param>
    <param-name>contextConfigLocation</param-name>
    <param-value>/WEB-INF/spring-context.xml</param-value>
  </context-param>

Once you define the context here, the Spring injection happens through it by default.

4. Congratulations, SEO is enabled. Now you have to tell the navigation state to use these SEO URLs with the property

urlFormatter=/atg/spring/FromSpring/seoUrlFormatter

5. Restart the servers. Now you will start seeing SEO URLs for all of the navigation states.

6. What is in endeca-seo-url-config? SeoUrlFormatter is the class where you write the logic to parse the input; it gets called for each handler defined. parsePathInfo can be utilised for the parsing, similar to the way we read request parameters.

7. Hence, extend the SeoUrlFormatter class, create your custom logic, and override the required method.

8. Adding descriptors.

You have to define the following formatter for each dimension for which you need to enable SEO. You really have control over the SEO from here.

appendAncestors - will append the ancestors.
appendDescriptor - will append the descriptor.
appendRoot - will append the root.

you can also define the separator here.

<bean id="categoryFormatter"
        class="com.endeca.soleng.urlformatter.seo.SeoDimLocationFormatter">

    <property name="key">
      <value>product.category</value>
    </property>

    <property name="appendRoot">
      <value>false</value>
    </property>

    <property name="appendAncestors">
      <value>true</value>
    </property>

    <property name="appendDescriptor">
      <value>true</value>
    </property>

    <property name="separator">
      <value>-</value>
    </property>

    <property name="rootStringFormatter">

      <bean class="com.endeca.soleng.urlformatter.seo.StringFormatterChain">
        <property name="stringFormatters">
          <list>
            <!-- replace 'product.category' with 'Category' -->
            <bean class="com.endeca.soleng.urlformatter.seo.RegexStringFormatter">
              <property name="pattern">
                <value>product.category</value>
              </property>

              <property name="replacement">
                <value>Category</value>
              </property>

              <property name="replaceAll">
                <value>false</value>
              </property>
            </bean>

            <!-- Execute the default string formatter chain -->
            <ref bean="defaultStringFormatterChain"/>
          </list>
        </property>
      </bean>
    </property>

    <property name="dimValStringFormatter">
      <ref bean="defaultStringFormatterChain"/>
    </property>

  </bean>

9. These are the steps for the SEO. Happy Learning !!!!

Friday, 6 July 2018

Performance Monitor in Oracle Commerce (ATG)

Hi All,

Performance is considered the most important factor in the success of development projects. In my earlier posts, I have shared my thoughts on doing efficient development. If you are seeing these posts for the first time, I am going to share the steps to follow for identifying how our code performs in an environment.

Oracle Web Commerce, as a mature enterprise application, gives you a way to find this; if you are not using it, there are also different tools available that you can use to identify it. I am going to concentrate on the Oracle Web Commerce tool, that is, the Performance Monitor.

I recommend the following steps.

1. Find the performance of your service using Chrome developer tools, SoapUI, JProfiler, or the many other tools available in the market.
2. Identify the services that take more time to respond.
3. Start putting the Performance Monitor in the places you suspect of causing more looping or consuming time, if you can guess them; otherwise, put it in the usual places: the start of methods, handlers, droplets, etc.

Here's how you can put it. This is how an operation starts:

if (PerformanceMonitor.isEnabled()) {
    PerformanceMonitor.startOperation(opName, parameter);
}

The call that ends the operation must pass the same arguments as the start call:

if (PerformanceMonitor.isEnabled()) {
    PerformanceMonitor.endOperation(opName, parameter);
}

If you want to cancel the operation instead, you can do so with:

if (PerformanceMonitor.isEnabled()) {
    PerformanceMonitor.cancelOperation(opName, parameter);
}

Here opName is the operation name; the results get listed under this name in the console (we will see how to view them at the end). I recommend using the class name, so it is easily identifiable:

this.getClass().getName()

The parameter can be any name you pass, but it should be descriptive and meaningful.

Every operation you start should also be ended; otherwise adding it serves no purpose. Also, if you start multiple operations with the same operation name, pass a unique parameter to each one to avoid collisions between the operations.
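Since the real PerformanceMonitor ships with ATG's DAS module and is not available outside an ATG server, here is a runnable sketch of the same start/end pairing using a minimal stand-in class. The stand-in, the lookupPrice method, and the operation names are all hypothetical; the point is the try/finally pattern, which guarantees every started operation is ended even when an exception is thrown:

```java
public class PerfDemo {
    // Minimal stand-in for atg.service.perfmonitor.PerformanceMonitor, used
    // here only so the start/end pairing pattern can run outside ATG.
    static class PerformanceMonitor {
        static boolean enabled = true;
        static int openOps = 0;
        static boolean isEnabled() { return enabled; }
        static void startOperation(String pOpName, String pParameter) { openOps++; }
        static void endOperation(String pOpName, String pParameter) { openOps--; }
    }

    static String lookupPrice(String sku) {
        // Using the class name as the operation name makes it easy to spot in the console
        String opName = PerfDemo.class.getName();
        if (PerformanceMonitor.isEnabled()) {
            PerformanceMonitor.startOperation(opName, "lookupPrice");
        }
        try {
            return "9.99"; // the real work being measured would go here
        } finally {
            // finally guarantees the operation is ended, even on exceptions
            if (PerformanceMonitor.isEnabled()) {
                PerformanceMonitor.endOperation(opName, "lookupPrice");
            }
        }
    }

    public static void main(String[] args) {
        System.out.println(lookupPrice("sku-123"));     // 9.99
        System.out.println(PerformanceMonitor.openOps); // 0: every start was ended
    }
}
```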

After adding this, promote the code to an environment and do the following.

1. Enable the Performance Monitor.

Navigate to the following component:

http://hostname:port/dyn/admin/atg/dynamo/admin/en/performance-monitor-config.jhtml

Here you can set the mode. There are different modes available:

1. NORMAL - tracks the stack of operations each thread is currently executing
2. TIME - keeps statistics on how much time each operation takes
3. MEMORY - keeps statistics on how much time and memory each operation takes
4. DISABLED

It is disabled by default; enable it by selecting any of the modes, based on the type of performance you want to measure.

After enabling it, execute or invoke your service again.

Then navigate to the dashboard or the console for the results.

You can view the results at http://hostname:port/dyn/admin/atg/dynamo/admin/en/performance-monitor.jhtml

Here you can see the operation names; when you click one, you will get the individual parameter-level results under the following columns:

Operation || Number of Executions || Average Execution Time (msec) || Minimum Execution Time (msec) || Maximum Execution Time (msec) || Total Execution Time (msec)


With this, you can analyse which parts to concentrate on, simplify the logic, remove loops, and so on.

Happy Learning. I will try to cover the individual modes in detail in upcoming posts.


Wednesday, 13 June 2018

Configuring GroupingApplicationRoutingStrategy

The GroupingApplicationRoutingStrategy allows more flexible groupings of sites than
SiteApplicationRoutingStrategy does. For example, with GroupingApplicationRoutingStrategy,
you can have three sites handled by one EAC application and two other sites handled by a second EAC application. If a site has multiple languages, all records for the site are directed to the site’s EAC application, regardless of the language.

Mapping of applications to sites is done through the applicationGroupingMap property of the
GroupingApplicationRoutingStrategy component. This property is a Map where each key is the name
of an EAC application and the corresponding value is a list of the site IDs of the sites to be routed to that
application. 

Navigate to /atg/endeca/ApplicationConfiguration

set 

applicationRoutingStrategy=\
  /atg/endeca/configuration/GroupingApplicationRoutingStrategy

 To ensure that separate records are created for each EAC application, you need to add
the MultipleSiteVariantProducer to the variantProducers property of each
EndecaIndexingOutputConfig component. For example:

variantProducers+=/atg/search/repository/MultipleSiteVariantProducer

 Also, set the siteIDsToIndex property with all the sites required to index in the output config. Please consider this an important step; if you do not do this, your indexing will always fail.

 Set the routingObjectAdapter property of the /atg/endeca/index/IndexingApplicationConfiguration component to specify the ContextRoutingObjectAdapter component to use:

routingObjectAdapter=\
 /atg/endeca/index/configuration/GroupingContextRoutingObjectAdapter

Set the routingObjectAdapter property of the /atg/endeca/assembler/AssemblerApplicationConfiguration component to specify the RequestRoutingObjectAdapter component to use:

routingObjectAdapter=\
 /atg/endeca/assembler/configuration/GroupingRequestRoutingObjectAdapter

eg: ApplicationConfiguration

workbenchHostName=localhost

# Our Workbench Port
workbenchPort=8006

applicationRoutingStrategy=\
  /atg/endeca/configuration/GroupingApplicationRoutingStrategy

defaultLanguageForApplications=

applicationKeyToMdexHostAndPort=\
ClothSiteAPP=localhost:15000,\
ApprealSiteApp=localhost:16000

keyToApplicationName^=/Constants.null

where ClothSiteAPP & ApprealSiteApp are EAC apps of the site. 


eg: GroupingApplicationRoutingStrategy

applicationGroupingMap=\
 ClothSiteAPP=ClothSite|shoeSiteCanada,\
 ApprealSiteApp=ApprealSite|clothesSiteUK|clothesSiteCanada

 It is necessary to mention two groups here; otherwise you won't get the response back from the assembler.

 For these groups of sites, these apps will be used.
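To make the grouping concrete, here is a small illustrative sketch (not the ATG parser; the class and method names are mine) that reads a grouping map in the same App=site1|site2 syntax and finds which EAC application handles a given siteId:

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class GroupingRouteDemo {
    // Parse "App1=siteA|siteB,App2=siteC" into app name -> list of siteIds
    static Map<String, List<String>> parseGroupingMap(String raw) {
        Map<String, List<String>> grouping = new LinkedHashMap<>();
        for (String entry : raw.split(",")) {
            String[] kv = entry.split("=", 2);
            grouping.put(kv[0].trim(), Arrays.asList(kv[1].split("\\|")));
        }
        return grouping;
    }

    // Find which EAC application a given siteId routes to
    static String appForSite(Map<String, List<String>> grouping, String siteId) {
        for (Map.Entry<String, List<String>> e : grouping.entrySet()) {
            if (e.getValue().contains(siteId)) {
                return e.getKey();
            }
        }
        return null; // no application handles this site
    }

    public static void main(String[] args) {
        Map<String, List<String>> grouping = parseGroupingMap(
            "ClothSiteAPP=ClothSite|shoeSiteCanada,"
            + "ApprealSiteApp=ApprealSite|clothesSiteUK|clothesSiteCanada");
        System.out.println(appForSite(grouping, "clothesSiteUK")); // ApprealSiteApp
    }
}
```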

eg:IndexingApplicationConfiguration

CASHostName=localhost
CASPort=8500
EACHostName=localhost
EACPort=8888
routingObjectAdapter=\
  /atg/endeca/index/configuration/GroupingContextRoutingObjectAdapter
cxfLogLevelOverride^=/Constants.null


eg: AssemblerApplicationConfiguration

routingObjectAdapter=\
  /atg/endeca/assembler/configuration/GroupingRequestRoutingObjectAdapter

useFileStoreFactory=true


applicationKeyToStoreFactory=\
ClothSiteAPP=/atg/endeca/assembler/cartridge/manager/ClothSiteFileStoreFactory,\
ApprealSiteApp=/atg/endeca/assembler/cartridge/manager/ApprealSiteFileStoreFactory


Here you cannot use the default file store factory; you have to use a separate file store factory for each instance.

Create a component 

eg :ClothSiteFileStoreFactory with the below data

$class=atg.endeca.assembler.content.ExtendedFileStoreFactory
configurationPath=\
C:\\Endeca\\Apps\\ClothSite
appName=ClothSiteApp

Lastly, configure the

eg:EndecaAdministrationService

/atg/endeca/assembler/admin/EndecaAdministrationService

$class=atg.endeca.assembler.MultiAppAdministrationService
storeFactory^=/Constants.NULL

Set this property to handle multiple applications.

Happy Learning !!!! 

Configuring SiteApplicationRoutingStrategy

Use the SiteApplicationRoutingStrategy if you have a separate EAC application for each site (with all languages in a given site being handled by that site’s EAC application), or if you have a separate EAC application for each combination of site and language. Make sure you are creating a separate app for the site.

Navigate to /atg/endeca/ApplicationConfiguration

set 
applicationRoutingStrategy=\
 /atg/endeca/configuration/SiteApplicationRoutingStrategy

 In addition, to ensure that separate records are created for each site, you need to add the UniqueSiteVariantProducer to the variantProducers property of each EndecaIndexingOutputConfig component. For example ProductCatalogOutputConfig,MediaOutputConfig,ArticleOutputConfig.

 variantProducers+=/atg/search/repository/UniqueSiteVariantProducer

 Also, set the siteIDsToIndex property with all the sites required to index in the output config. Please consider this an important step; if you do not do this, your indexing will always fail.

 Set the routingObjectAdapter property of the /atg/endeca/index/IndexingApplicationConfiguration component to specify the ContextRoutingObjectAdapter component to use:

routingObjectAdapter=\
 /atg/endeca/index/configuration/SiteContextRoutingObjectAdapter

Set the routingObjectAdapter property of the /atg/endeca/assembler/AssemblerApplicationConfiguration component to specify the RequestRoutingObjectAdapter component to use:

routingObjectAdapter=\
 /atg/endeca/assembler/configuration/SiteRequestRoutingObjectAdapter

eg: ApplicationConfiguration

workbenchHostName=localhost

# Our Workbench Port
workbenchPort=8006

applicationRoutingStrategy=\
 /atg/endeca/configuration/SiteApplicationRoutingStrategy

defaultLanguageForApplications=

keyToApplicationName=\
 ClothSite=ClothSiteAPP,\
 ApprealSite=ApprealSiteApp

applicationKeyToMdexHostAndPort=\
ClothSite=localhost:15000,\
ApprealSite=localhost:16000

where ClothSite & ApprealSite are the siteIds of the application

eg: SiteApplicationRoutingStrategy

filterByLocale=true
applicationNameFormatString={0}{1}
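Assuming the {0}{1} pattern is filled MessageFormat-style with the application key and the language (an assumption about the convention, not a call into the real strategy class), the resulting EAC application name can be sketched as:

```java
import java.text.MessageFormat;

public class AppNameDemo {
    // Assumption: {0} is the application key, {1} is the language code;
    // the exact substitution is internal to the routing strategy.
    static String eacAppName(String format, String appKey, String language) {
        return MessageFormat.format(format, appKey, language);
    }

    public static void main(String[] args) {
        System.out.println(eacAppName("{0}{1}", "ClothSiteAPP", "en")); // ClothSiteAPPen
    }
}
```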

eg:IndexingApplicationConfiguration

CASHostName=localhost
CASPort=8500
EACHostName=localhost
EACPort=8888
routingObjectAdapter=\
 /atg/endeca/index/configuration/SiteContextRoutingObjectAdapter
cxfLogLevelOverride^=/Constants.null


eg: AssemblerApplicationConfiguration

routingObjectAdapter=\
  /atg/endeca/assembler/configuration/SiteRequestRoutingObjectAdapter

useFileStoreFactory=true

applicationKeyToStoreFactory=\
ClothSite=/atg/endeca/assembler/cartridge/manager/ClothSiteFileStoreFactory,\
ApprealSite=/atg/endeca/assembler/cartridge/manager/ApprealSiteFileStoreFactory


Here you cannot use the default file store factory; you have to use a separate file store factory for each instance.

Create a component 

eg: ClothSiteFileStoreFactory with the below data

$class=atg.endeca.assembler.content.ExtendedFileStoreFactory
configurationPath=\
C:\\Endeca\\Apps\\ClothSite
appName=ClothSiteApp


Lastly, configure the

eg:EndecaAdministrationService

/atg/endeca/assembler/admin/EndecaAdministrationService

$class=atg.endeca.assembler.MultiAppAdministrationService
storeFactory^=/Constants.NULL

Set this property to handle multiple applications.

Happy Learning !!!! 

Configuring Single ApplicationRoutingStrategy

This ApplicationRoutingStrategy is available by default, and you need to follow the steps below to configure it.

Navigate to /atg/endeca/ApplicationConfiguration

Set applicationRoutingStrategy property to null. This property is null by default, so you can leave it unset or set it to null explicitly. If applicationRoutingStrategy is null, an instance of the SingleApplicationRoutingStrategy class is created automatically.

Similarly, set the /atg/endeca/index/IndexingApplicationConfiguration .routingObjectAdapter and /atg/endeca/assembler/AssemblerApplicationConfiguration.routingObjectAdapter properties to null to automatically create instances of the SingleContextRoutingObjectAdapter and SingleRequestRoutingObjectAdapter
classes.

Additional configuration differs depending on whether you have a single EAC application for all languages or a separate EAC application for each language.

eg: ApplicationConfiguration
baseApplicationName=ATG
defaultApplicationName=ATG
# Our Workbench Host
workbenchHostName=localhost

# Our Workbench Port
workbenchPort=8006

If your application name is ATG, configure ApplicationConfiguration as above.

If you have different locales, configure it like this; set this property only if you have different locales, otherwise leave it null:

defaultLanguageForApplications=fr

If this is set along with a locale, then during indexing the application looks for the EAC applications as ATGfr, ATGen, etc.
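The naming convention described above can be sketched as follows (the class and method are mine, purely illustrative of baseApplicationName plus language code):

```java
import java.util.ArrayList;
import java.util.List;

public class LocaleAppNames {
    // Per-language EAC application names: base name + language code,
    // e.g. ATG + fr -> ATGfr (illustrative of the convention described above)
    static List<String> eacAppNames(String base, List<String> languages) {
        List<String> names = new ArrayList<>();
        for (String lang : languages) {
            names.add(base + lang);
        }
        return names;
    }

    public static void main(String[] args) {
        System.out.println(eacAppNames("ATG", java.util.Arrays.asList("fr", "en")));
        // [ATGfr, ATGen]
    }
}
```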

eg:IndexingApplicationConfiguration 

CASHostName=localhost
CASPort=8500
EACHostName=localhost
EACPort=8888

eg:AssemblerApplicationConfiguration

defaultMdexHostName=localhost
defaultMdexPort=15000


Happy Learning !!!!

RoutingStrategy in Endeca

Hi guys, sorry for taking so long to get back in 2018. I was busy with my regular work and did not find time to share my experiences with you all. I have been preparing good content for you these days, and I am sure you are going to like the upcoming topics; they are very interesting and carry a lot of conceptual weight for you as an Endeca developer.

Why do we want to go for routing?

The answer is simple !!!! The requirements of an application will not be the same all the time. Sometimes the application works as a multisite, a single site, or a group of sites. When we go with the default way Endeca provides, this is very hard to achieve; hence Endeca provides a way to simplify these tasks.

I used to term it simplification, but the actual definition as per the documentation goes this way: "Routing is the process of directing records for indexing to specific EAC applications and their corresponding MDEX instances, and ensuring that queries (for example, search terms or dimension selections) are directed to the correct EAC applications as well." Remember this definition.

Now that we understand routing, let's see the different types of routing supported by the platform, Guided Search 11.3 (Endeca).

There are three types of RoutingStrategy:

1. SingleApplicationRoutingStrategy

This is the default routing strategy that Endeca comes with. It is recommended when we have no site-based application: a single MDEX with one set of data and only one EAC application. Locale and language make no difference. For configuring this ApplicationRoutingStrategy, check here.

2. SiteApplicationRoutingStrategy

This strategy can be used when we have site-based applications. For example, if I have a multisite application with different EAC applications for the sites, different MDEXes, and different data, then we can go for this approach (with all languages in a given site being handled by that site’s EAC application), or if you have a separate EAC application for each combination of site and language. For configuring the SiteApplicationRoutingStrategy, check here.

3. GroupingApplicationRoutingStrategy

This strategy can be used when we have a single EAC application for a group of sites, regardless of the language; it is more flexible than the SiteApplicationRoutingStrategy. For configuring the GroupingApplicationRoutingStrategy, check here.

Happy Learning Stay Tuned for featured posts !!!

Friday, 20 October 2017

About Editors in Endeca

Hi All,
   
   Happy Diwali !!!

Most of us, when working on the XM part of Endeca, don't explore the editors much, since we go with the OOTB editors; there was no necessity to learn about these OOTB editors either.

So today in this post I am going to explain some basics about these editors. What are editors? Editors are nothing but the medium with the help of which we configure the data that flows from the XM.

So what default editors are available? There are many editors available, of which some very important ones for basic operations are Boost and Bury, Choice, Link, Media, String, etc.

So if you want to override the existing editors, you have to follow some default steps. We can see this topic in future posts on developing via the SDK.

Now, how to change their configurations? What are configurations? They are nothing but the values supporting the functioning of these editors: what their inputs are and what their configuration files are. This can be helpful in many customization contexts. To achieve this, follow the steps below.

1) Export the existing editors using

runcommand.bat IFCR exportContent editors D:\backup\editors

The path you provide after "editors" will be the place where they get exported.

2) After the export, edit the configs with the custom values you need.

3) Import the editors using

runcommand.bat IFCR importContent editors D:\backup\editors

Once the import is done, the editors with the new configs are available.

Extending and creating new cartridges will be covered in future posts !!!


Happy Understanding !!!!!