Friday 28 April 2017

Consuming REST API in PHP Using Guzzle

If you are familiar with REST APIs, you know that they are built on HTTP calls for getting data from and posting data to a server. What if you wish to create a REST API client in PHP? Your first answer would probably be cURL. cURL is the most widely used way to make HTTP calls in PHP, but it involves several complicated steps.
Let’s see a simple CURL request in PHP:
$url = "https://api.example.com/endpoint"; // your API endpoint
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_HTTPHEADER, ['Accept: application/json', 'Content-Type: application/json']);
curl_setopt($ch, CURLOPT_CUSTOMREQUEST, 'GET');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$response = curl_exec($ch);
curl_close($ch);
You need to call curl_setopt() once for every option: the headers, the HTTP verb (GET, POST, PUT, etc.), and so on. It does look pretty complicated. So, what is a better and more robust alternative?
Here comes Guzzle.
Let’s see how Guzzle creates a request:
$client = new GuzzleHttp\Client();
$res = $client->request('GET', 'https://api.example.com/endpoint', [ // your API endpoint
    'headers' => [
        'Accept'       => 'application/json',
        'Content-Type' => 'application/json'
    ]
]);
As you can see, it is simple: you initialize the Guzzle client, give it an HTTP verb and a URL, and then pass an array of headers and other options.

Understand the Guzzle Client

Guzzle is a simple PHP HTTP client that provides an easy way to create calls and integrate with web services. It is the standard abstraction layer used to send API messages over HTTP. Several prominent features of Guzzle are:
  1. Guzzle can send both synchronous and asynchronous requests.
  2. It provides a simple interface for building query strings, POST requests, streaming large uploads & downloads, uploading JSON data, etc.
  3. Allows the use of other PSR-7 compatible libraries with Guzzle.
  4. Allows you to write environment and transport agnostic code.
  5. Middleware system allows you to augment and compose client behavior.

Install Guzzle In PHP

The preferred way of installing Guzzle is through Composer. If you haven't installed Composer yet, download it from getcomposer.org.
Now to install Guzzle, run the following command in SSH terminal:
composer require guzzlehttp/guzzle
This command will install the latest version of Guzzle in your PHP project. Alternatively, you can define it as a dependency in the composer.json file by adding the following code:
"require": {
    "guzzlehttp/guzzle": "~6.0"
}
After that, run the composer install command. Finally, require the autoloader and import the classes you are going to use:
require 'vendor/autoload.php';
use GuzzleHttp\Client;
use GuzzleHttp\Exception\RequestException;
use GuzzleHttp\Psr7\Request;
The installation process is over and now it’s time to work with a real example of creating HTTP calls with an API. For the purpose of this article, I will work with Cloudways API.

What You Can Do With the Cloudways API

Cloudways is a managed hosting provider for PHP, Magento, WordPress and many other frameworks and CMS. It has an API that you could use for performing CRUD operations on servers and applications. Check out popular use cases of the Cloudways API to see how you could integrate it into your projects.
In this article, I am going to create HTTP calls to perform specific operations with Guzzle on Cloudways server.

Create the HTTP Requests In Guzzle

As I mentioned earlier, creating HTTP requests in Guzzle is very easy; you only need to pass the base URI, the HTTP verb, and the headers. If the external API has an authentication layer, you can pass those parameters through Guzzle as well. Similarly, the Cloudways API needs an email address and an API key to authenticate users and send responses. You need to sign up for a Cloudways account to get your API credentials.
Let’s start by creating a CloudwaysAPIClient.php file to set up Guzzle for making HTTP calls. I will also create a class and several methods using HTTP calls in them.
The base URL of the API does not change, so I will store it in a class constant and later concatenate it with other URL suffixes to get responses. Additionally, I have declared the properties $auth_key and $auth_email, which hold the API key and the authentication email, and $accessToken, which holds the temporary token that is renewed on every call.
class CloudwaysAPIClient
{
    private $client = null;
    const API_URL = "https://api.cloudways.com/api/v1";
    public $auth_key;
    public $auth_email;
    public $accessToken;

    public function __construct($email, $key)
    {
        $this->auth_email = $email;
        $this->auth_key = $key;
        $this->client = new Client();
        $this->prepare_access_token();
    }

    // ... the methods in the following sections go inside this class
}

Create a POST Request to Get the Access Token

The access token is generated from the /oauth/access_token endpoint every time the API is accessed. I set this endpoint in $url along with a data array holding the auth credentials. Then I create a POST request with the base URL and the query string. The response is decoded, and the access token is saved for use in the other methods.
public function prepare_access_token()
{
    try {
        $url = self::API_URL . "/oauth/access_token";
        $data = ['email' => $this->auth_email, 'api_key' => $this->auth_key];
        $response = $this->client->post($url, ['query' => $data]);
        $result = json_decode($response->getBody()->getContents());
        $this->accessToken = $result->access_token;
    } catch (RequestException $e) {
        return $this->StatusCodeHandling($e);
    }
}
The POST request for getting the access token is now complete. If you look at the exception handling, you will notice the StatusCodeHandling($e) method, which takes care of the HTTP response codes (404, 401, 200, etc.) and returns the decoded error response.
public function StatusCodeHandling($e)
{
    // Every handled status code (400, 401, 403, 422, 500, and any other
    // error) is treated the same way: decode and return the response body.
    $response = json_decode($e->getResponse()->getBody(true)->getContents());
    return $response;
}
The main client class is now completed. I will extend it to create more HTTP requests for different cases.

Create a GET Request to Fetch All Servers

Once the user is authenticated, I can fetch all my servers and applications from Cloudways. /server is the suffix concatenated with the base URI. This time, I attach the access token as a Bearer token in the Authorization header to fetch all the servers as a JSON response. To do this, create a new method:
public function get_servers()
{
    try {
        $url = self::API_URL . "/server";
        $header = ['Authorization' => 'Bearer ' . $this->accessToken];
        $response = $this->client->get($url, ['headers' => $header]);
        return json_decode($response->getBody()->getContents());
    } catch (RequestException $e) {
        return $this->StatusCodeHandling($e);
    }
}
Now create an index.php file and include CloudwaysAPIClient.php at the top. Next, I declare my API key and email, passing them to the class constructor to finally get the servers.
include 'CloudwaysAPIClient.php';
$api_key = 'W9bqKxxxxxxxxxxxxxxxxxxxjEfY0';
$email = ''; // your Cloudways account email
$cw_api = new CloudwaysAPIClient($email, $api_key);
$servers = $cw_api->get_servers();
echo '<pre>';
print_r($servers);
echo '</pre>';
Let's test it in Postman to verify that the information and the right response codes are being fetched. All my servers hosted on the Cloudways Platform are fetched. Similarly, you can create new methods with HTTP calls to get applications, server settings, services, and so on.
Let's create a PUT call to change the label of a server, which is 'Cloned-php applications' at the moment. But first, I need to get the server ID and label, because this information will be used as arguments. To get them, create a foreach loop in the index.php file:
foreach ($servers->servers as $server) {
    echo $server->id;
    echo $server->label;
}
Now, if I hit the API, it will fetch the server id and label.

Create a PUT Request to Change Server Label

Now to change the server label, I need to create a PUT call in Guzzle. I will extend the class with a new method. Remember that server id and label are two necessary parameters that will be passed in the method.
public function changelabel($serverid, $label)
{
    try {
        $url = self::API_URL . "/server/$serverid";
        $data = ['server_id' => $serverid, 'label' => $label];
        $header = ['Authorization' => 'Bearer ' . $this->accessToken];
        $response = $this->client->put($url, ['query' => $data, 'headers' => $header]);
        return json_decode($response->getBody()->getContents());
    } catch (RequestException $e) {
        return $this->StatusCodeHandling($e);
    }
}
Now in index.php, put this condition inside the foreach loop:
if ($server->id == '71265' && $server->label == 'Cloned-php applications') {
    $label = 'Cloudways Server';
    $changelabel = $cw_api->changelabel($server->id, $label);
}
When testing this in Postman, I will get the updated server label.

Create a DELETE Request to Remove a Server

To delete a server using the Cloudways API, I need to create a DELETE request in Guzzle through the following method. This is pretty similar to the method above, because it also takes two parameters: the server ID and the label.
public function deleteServer($serverid, $label)
{
    try {
        $url = self::API_URL . "/server/$serverid";
        $data = ['server_id' => $serverid, 'label' => $label];
        $header = ['Authorization' => 'Bearer ' . $this->accessToken];
        $response = $this->client->delete($url, ['query' => $data, 'headers' => $header]);
        return json_decode($response->getBody()->getContents());
    } catch (RequestException $e) {
        return $this->StatusCodeHandling($e);
    }
}
Try this in Postman or just refresh the page. The server will be deleted.

Final Words

Guzzle is a flexible HTTP client that you can extend as per your requirements. You can also experiment with uploading data, form fields, cookies, redirects, and exceptions, and you can create middleware for an authentication layer if needed. All in all, Guzzle is a great option for consuming REST APIs in PHP without using any framework.
If you have any questions or queries, you can comment below.

Thursday 27 April 2017

Go language and its features

In the past couple of years there has been a rise of a new programming language: Go, or GoLang. It is a free and open source programming language created at Google in 2007 by Robert Griesemer, Rob Pike, and Ken Thompson. It is a compiled, statically typed language in the tradition of Algol and C, with garbage collection, limited structural typing, memory safety features, and CSP-style concurrent programming features added.

But what kinds of projects are Go best for building, and how is that likely to change as the language evolves through new versions and grows in popularity?
1. Network and Web servers -
Network applications live and die by concurrency, and Go's native concurrency features, goroutines and channels, are well suited for such work. Consequently, many Go projects are for networking, distributed functions, or services: APIs, Web servers, minimal frameworks for Web applications, and the rest. Go programmers like that the pieces they use most in such projects are either part of the language, such as goroutines for threadlike behavior, or available in the standard library, like Go's http package.

2. Stand-alone command-line apps or scripts -
It's easy to put out simple command-line apps that run almost anywhere, another echo of Go's similarities to Python. The executables created by Go are precisely that: stand-alone executables with no external dependencies unless you specify them. Another advantage Go has here is speed: the resulting executables run far faster than vanilla Python, or for that matter most other dynamically executed languages, with the possible exception of JavaScript.

3. Desktop or GUI-based apps -
Right now, the software culture around building rich GUIs for Go applications, such as desktop applications, is still scattered. That said, various projects exist: there are bindings for the GTK and GTK3 frameworks, and another project intends to provide platform-native UIs, although the latter relies on C bindings and is not written in pure Go. Windows users can try out walk, and some folks at Google are in the process of building a cross-platform GUI library.

4. System-level programming -
While Go can talk to native system functions, it's not as good a fit for creating extremely low-level system components, like embedded systems design, kernels, or device drivers. Some of this is a by-product of the language's intentions, since the runtime and the garbage collector for Go applications are dependent on the underlying OS.

Even though Go is very different from other object-oriented languages, it is still recognizably the same kind of beast: Go provides high performance like C/C++, super-efficient concurrency handling like Java, and is as fun to code in as Python or Perl.

Saturday 22 April 2017

What is data mining?

Data mining is the process of discovering patterns in large data sets involving methods at the intersection of artificial intelligence, machine learning, statistics, and database systems. In other words, data mining is mining knowledge from data. It uses sophisticated mathematical algorithms to segment the data and evaluate the probability of future events.
While large-scale information technology has been evolving separate transaction and analytical systems, data mining provides the link between the two. Data mining software is one of a number of analytical tools for analyzing data. It allows users to analyze data from many different dimensions or angles, categorize it, and summarize the relationships identified. It is also able to answer questions that cannot be addressed through simple query and reporting techniques. Generally, any of four types of relationships are sought:
  • Classes - Stored data is used to locate data in predetermined groups.
  • Clusters - Data items are grouped according to logical relationships or consumer preferences.
  • Sequential patterns - Data is mined to anticipate behavior patterns and trends.
  • Associations - Data can be mined to identify associations. The beer-diaper example is an example of associative mining.
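The "associations" idea above can be sketched in a few lines of code. The snippet below (plain JavaScript, with made-up transaction data) computes the two standard association-mining measures, support and confidence, for the classic beer-diaper pair:

```javascript
// Toy association mining: measure how often items co-occur in the
// same transaction (the classic "beer and diapers" example).
const transactions = [
  ['beer', 'diapers', 'chips'],
  ['beer', 'diapers'],
  ['milk', 'bread'],
  ['beer', 'chips'],
  ['diapers', 'milk'],
];

// Support of an itemset = fraction of transactions containing all its items.
function support(items) {
  const hits = transactions.filter(t => items.every(i => t.includes(i)));
  return hits.length / transactions.length;
}

// Confidence of the rule A -> B = support(A and B) / support(A).
function confidence(a, b) {
  return support([a, b]) / support([a]);
}

console.log(support(['beer', 'diapers']));   // 0.4
console.log(confidence('beer', 'diapers')); // ~0.67
```

Real association-mining algorithms such as Apriori are built on exactly these two measures; they just search the space of itemsets efficiently instead of checking one hand-picked pair.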

Data mining tools and techniques

Data mining techniques are used in many research areas, including mathematics, cybernetics, genetics and marketing. While data mining techniques are a means to drive efficiencies and predict customer behavior, if used correctly, a business can set itself apart from its competition through the use of predictive analysis.
Web mining - a type of data mining used in customer relationship management, integrates information gathered by traditional data mining methods and techniques over the web. Web mining aims to understand customer behavior and to evaluate how effective a particular website is.
Other data mining techniques include network approaches based on multitask learning for classifying patterns, ensuring parallel and scalable execution of data mining algorithms, the mining of large databases, the handling of relational and complex data types, and machine learning.

Benefits of data mining

In general, the benefits of data mining come from the ability to uncover hidden patterns and relationships in data that can be used to make predictions that impact businesses. Today, data mining is primarily used by companies with a strong consumer focus - retail, financial, communication, and marketing organizations. It enables these companies to determine relationships among internal factors such as price, product positioning, or staff skills, and external factors such as economic indicators, competition, and customer demographics. And it enables them to determine the impact on sales, customer satisfaction, and corporate profits.
With data mining, a retailer could use point-of-sale records of customer purchases to send targeted promotions based on an individual's purchase history. By mining demographic data from comment or warranty cards, the retailer could develop products and promotions to appeal to specific customer segments. 

Sunday 16 April 2017

Design a free dynamic website using Google Firebase

In this small tutorial we will be covering almost all topics related to dynamic web applications, and in each chapter we will be developing web apps using Firebase.

Firebase is a framework by Google that is free for limited usage; see the official Firebase site for details.
There are basically two main types of website: static and dynamic. A static site is usually written in plain HTML, and what is in the code of the page is what is displayed to the user. A dynamic site is written using a server-side scripting language such as PHP, ASP, JSP, or ColdFusion.

Chapter 1: Firebase installation setup

Click on Add project and fill in the details.
Now after this, click on Add Firebase to your web app.

Copy and paste the script into your web page. If you have not created one yet, then use the code below and create a new file with an html extension, like mypage.html. By the way, these are all basic things which you should know already.
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <meta http-equiv="X-UA-Compatible" content="ie=edge">
    <title>Document</title>
</head>
<body>
    <!-- Paste your Firebase script just above the closing body tag. -->
    <script src=""></script>
    <script src=""></script>
    <script src=""></script>
    <script src=""></script>
</body>
</html>
Don't forget to paste the above CDN scripts just before your Firebase script.

Now it is very important to set up your Firebase database as shown in the snapshot above.

Now it's time to code your web page.

Inside your body tag, above your Firebase script, make one header tag:
<h1 id="text">Here we will show data from the Firebase database</h1>
Now, inside the Firebase script tag, we will set the data for the h1 tag using id="text":
var text = document.getElementById('text');
var dbRef = firebase.database().ref().child('text');
dbRef.on('value',snap => text.innerText = snap.val());
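For readers unfamiliar with this callback style: on('value', cb) registers a listener that Firebase invokes with a snapshot whenever the data at that reference changes. The stand-in below (plain JavaScript, not the Firebase SDK) mimics that behavior, which may make the snippet above easier to follow:

```javascript
// A tiny stand-in for a Firebase-style reference: listeners registered
// with on('value', cb) receive a snapshot whenever set() changes the value.
class FakeRef {
  constructor() {
    this.value = null;
    this.listeners = [];
  }
  on(event, cb) {
    if (event === 'value') {
      this.listeners.push(cb);
      cb({ val: () => this.value }); // fire once with the current value, like Firebase
    }
  }
  set(newValue) {
    this.value = newValue;
    this.listeners.forEach(cb => cb({ val: () => this.value }));
  }
}

const ref = new FakeRef();
let text = '';
ref.on('value', snap => { text = snap.val(); });
ref.set('Hello from the database');
console.log(text); // "Hello from the database"
```

With the real SDK, the difference is that set() may be called from another client entirely, and every connected page's 'value' listener still fires; that is what makes the page dynamic without any server-side code of your own.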

free and open source IDE for Linux user

Nowadays more and more people are turning toward programming, and they are successfully building great applications. Linux, an open source operating system, is used worldwide on many desktops, servers, and mobile devices. The main reasons Linux is so loved are that it provides great security and stability, is less expensive than other software, protects privacy, and gives users control over their own hardware.

Linux is everywhere, and it has great benefits for programmers. If you love Linux programming, you can have a good career in system administration, and learning Linux can land you a good job title in the market.

As we all know, programming is all about typing, and programmers constantly search for text editors to help them with their coding. At this point, knowing some of the best IDEs comes in handy, to save time and mental effort. Many programmers learn to code using a text editor, but in time they move toward an IDE, as it makes the art of coding more efficient and quicker. Here is a look at the quality software available for Linux.

Here is the list of powerful IDEs for Linux -

10. Geany - Geany is a lightweight IDE and it supports all major languages. It was designed specifically to provide a fast and small IDE, and it needs only the GTK2 libraries to remain independent of desktop environments. It has all the basic features, such as auto-indent, syntax highlighting, and auto-completion of code or snippets. Geany is clean and provides a large space to work in. So if you want a lightweight and pretty basic IDE for your development, then go with Geany.

9. Zend Studio - PHP developers use Zend for faster coding, resolving issues easily, and integrating freely with the cloud. It has a power pack of tools such as Zend Studio, PHPUnit, and Composer, which forms a one-stop shop for mobile app developers and PHP developers.

8. CodeLite - CodeLite is a free, open-source, cross-platform IDE for the C, C++, PHP, and Node.js programming languages. To comply with CodeLite's open source spirit, the program itself is compiled and debugged using only free tools for Mac OS X, Windows, Linux and FreeBSD, though CodeLite can execute any third-party compiler or tool that has a command-line interface. 

7. Gedit - Gedit is an IDE that comes pre-installed with the Gnome Linux desktop environment. It is a very simple and small IDE, but it can be customized to fit your working environment by installing plugins and configuring existing settings. Gedit does not provide the easiest way to install plugins, but you can download plugins and install them manually.

6. KATE - KATE is the text editor that comes pre-installed with the KDE desktop environment. It is lightweight and fast, and it can open multiple files simultaneously. KATE is a simple yet powerful IDE. It supports a great number of languages, auto-detects the language, and sets the indentation for the document automatically. Programmers can split the window to work with multiple documents simultaneously. KATE has an embedded terminal, an SQL plugin, find & replace, session support, syntax highlighting, smart comment and uncomment handling, and bracket matching. It also takes backups automatically, so in case of a crash or unexpected shutdown your work doesn't get lost.

5. Bluefish Editor - Bluefish is a free and open source development project targeted at web developers and programmers. If you are a web developer, then Bluefish editor can be a good choice. It supports many advanced features such as auto-completion of tags, auto-indentation, powerful search & replace, and integration with external programs such as make, lint, and weblint.

4. Brackets - Brackets is an IDE developed by Adobe. It is the IDE for you if you're a web designer. There are several awesome features in Brackets that make it stand out. Brackets supports plugins to extend its functionality, and installing plugins is really easy. Besides all the basic features such as auto-indentation, auto-completion, and code highlighting, Brackets has advanced features that really help while you're editing web pages and working with CSS, such as inline editing, editor splitting, and more.

3. Eclipse - Eclipse is a free, open-source editor made for heavy Java development. It is more advanced and robust. Eclipse is mostly written in Java and is primarily used for developing Java applications, but its language support can be extended by installing plugins. With plugin support, Eclipse becomes one of the best IDEs for developing programs in C, C++, COBOL, Fortran, Haskell, JavaScript, PHP, Perl, Python, R, Ruby and Ruby on Rails, Scheme, and many more.

2. Atom - Atom is the IDE developed by GitHub, and it is completely hackable, which means you can customize it as you want. It supports a large number of programming languages by default, like PHP, JavaScript, HTML, CSS, Sass, Less, Python, C, C++, CoffeeScript, etc., and you can extend its language support by installing plugins.

1. Sublime Text - The one that tops the list is, obviously, Sublime Text: the lightest of all, a feature-rich IDE used by professional programmers. Besides all the basic features such as code highlighting, auto-indent, and auto-completion, Sublime has powerful features that let programmers code really fast. Sublime Text contains 22 different visual themes, with the option to download additional themes and configure custom themes via third-party plugins. Sublime Text's popular replacements and main competitors are Atom, BBEdit, TextMate, Notepad++, Emacs, vim, Brackets, Visual Studio Code, and others.

What is AI and how it will change our future?

Since the invention of computers, their capability to perform various tasks has grown exponentially. Humans have developed the power of computer systems in terms of their diverse working domains, their increasing speed, and their reducing size over time.

A branch of Computer Science named Artificial Intelligence (AI) is usually defined as the science of making computers do things that require intelligence when done by humans. AI has had some success in limited, or simplified, domains. However, the five decades since the inception of AI have brought only very slow progress, and early optimism concerning the attainment of human-level intelligence has given way to an appreciation of the profound difficulty of the problem.

According to the father of Artificial Intelligence, John McCarthy, it is "The science and engineering of making intelligent machines, especially intelligent computer programs". Artificial Intelligence is a way of making a computer, a computer-controlled robot, or a piece of software think intelligently, in a manner similar to how intelligent humans think. AI is accomplished by studying how the human brain thinks and how humans learn, decide, and work while trying to solve a problem, and then using the outcomes of this study as a basis for developing intelligent software and systems.

We’re seeing ongoing discussion around evaluating AI systems with the Turing Test, warnings that hyper-intelligent machines are going to slaughter us and equally frightening, if less dire, warnings that AI and robots are going to take all of our jobs.  In parallel we have also seen the emergence of systems such as IBM Watson, Google's Deep Learning, and conversational assistants such as Apple's Siri, Google Now and Microsoft's Cortana. Mixed into all this has been crosstalk about whether building truly intelligent systems is even possible.

How AI will change our future?
Humanity is moving forward at great strides, or at least at the technological level. No aspect of our lives goes by without technology touching it somehow, either for better or worse, and we’re only in the beginning stages. So what’s in store for the future? Robots and artificial intelligence, and further down the long, winding path of history, transhumanism, the cherry on the cake. While robots and AI are not new, it’s taken some time to develop them.

Around 200 years ago the industrial revolution immutably remoulded society. Today another revolution is underway, with potentially even further-reaching consequences. Modern robots can now replicate the movements and actions of humans; the next challenge lies in creating autonomous, self-thinking robots that can react to changing conditions. Artificial intelligence promises to give machines the ability to think analytically, using concepts and advances in computer science, robotics, and mathematics. Once they are perfected, nothing will be the same. Here are the fields in which AI will affect our lives -

1. Better Weather Predictions - Predicting the weather accurately can be tricky, especially when you have to go through large volumes of data, but thanks to artificial intelligence software currently being developed that may soon change. The software will be able to sift through all the available data, get a clearer and better picture of approaching weather phenomena and issue the corresponding early warnings, thus saving lives.

2. Tackling Household Chores - One of the earliest promises of AI described in science fiction from Isaac Asimov to the Jetsons was robots that could perform household chores and eliminate the drudgery from the workplace. That promise has been fulfilled in part by the programmable robotic vacuum cleaner in your home, which can maneuver around obstacles like stairs, furniture and even the cat. Intelligent robots will not only clean your living room and do the dishes, but may also tackle jobs like assembling furniture or caring for kids and pets.

3. Autonomous Transportation - The autonomous, driverless car, is already here thanks to Google, and several US States have already passed legislation allowing them to roll down the road. The technology uses a LIDAR laser radar system and a range finder. The system allows the vehicle to generate a detailed 3D map of its environment. The car then takes these generated maps and combines them with high-resolution maps of the world, producing different types of data models that will allow it to drive itself.

4. Space Exploration - Artificial intelligence and robots will play a major role in space travel in the not-so-distant future. NASA already depends on unmanned shuttles, rovers and probes to explore distant galaxies that would take years for humans to reach. Autonomous land rovers have recently given researchers a treasure trove of data and photographs collected from the Martian surface, where inhospitable conditions make human exploration impossible. These smart vehicles sense obstacles, like craters, and find safe paths of travel around them before returning to the shuttle.

5. Always on Guard - Artificial intelligence is widely used to protect families from burglaries and the country from terrorist threats. The U.S. Department of Homeland Security uses an array of AI technology to safeguard the nation, including virtual smart agents to supplement its human workforce, and sophisticated software monitoring systems, which scan phone calls and other communications by sifting through large volumes of data quickly and sorting out casual conversations from potential threats. Modern home alarm systems that use AI distinguish between occupants and unknown persons.

AI is also a fundamental part of the concept of the Internet of Things – a world where machines and devices all communicate with each other to get the work done, leaving us free to relax and enjoy life.

However, as we've previously seen with the internet revolution, and the big data revolution, and all the other technological revolutions of recent times, there are obstacles to be overcome before we reach this technological utopia. As businesses scramble for their share of a $70 billion market, some will inevitably prosper and some will fail. Those that manage to succeed are likely to be those which can manage to see beyond the hype – and answer hard questions about how this technology can add real value and drive positive change.

What is WebGL and how does it work?


Web Graphics Library (WebGL) is a JavaScript API for rendering 2D and 3D graphics within the web browser without the use of any plug-ins. It is derived from the OpenGL ES 2.0 library, a low-level 3D API for phones and other mobile devices. WebGL provides similar functionality to ES 2.0, uses the HTML5 canvas element, and performs well on modern 3D graphics hardware.

WebGL is written in a mix of JavaScript and shader code that is written in OpenGL Shading Language, a language similar to C or C++, and is executed on a computer's GPU.

In 2007, Vladimir Vukicevic, an American-Serbian software engineer, started working on an OpenGL prototype for the Canvas element of the HTML document. By the end of 2007, both Mozilla and Opera had made their own separate implementations. In early 2009, the non-profit technology consortium Khronos Group started the WebGL Working Group, with initial participation from Apple, Google, Mozilla, Opera, and others.

Version 1.0 was released in March 2011; early adopters of WebGL included Google Maps and Zygote Body. Autodesk also ported many of its applications to the cloud, running on local WebGL systems. Browsers that support WebGL include Google Chrome, Mozilla Firefox, Internet Explorer, Opera, and Safari. It is also supported by a number of mobile browsers, including Opera Mobile, WebOS, and MeeGo.

How does it work?

WebGL is slightly more complicated than typical web technologies because it is designed to work directly with your graphics card; this is what allows it to rapidly perform complex 3D rendering involving lots of calculations. To access WebGL content you need a browser that supports it, and a good graphics card will likely improve WebGL performance on your computer.
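As a sketch, a page can check for WebGL support before trying to render anything. The helper below is illustrative (the name getWebGLContext is our own); in a real page you would pass it a canvas element such as document.createElement('canvas'):

```javascript
// Illustrative helper: returns a WebGL rendering context if the browser and
// GPU support it, or null otherwise.
function getWebGLContext(canvas) {
  if (!canvas || typeof canvas.getContext !== 'function') return null;
  // Older browsers exposed WebGL under the 'experimental-webgl' name.
  return canvas.getContext('webgl') || canvas.getContext('experimental-webgl');
}
```

When the helper returns null, a page would typically fall back to a 2D canvas or a static image.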

When programming in WebGL, you are usually aiming to render a scene of some kind. This usually includes multiple subsequent draw jobs or calls, each of which is carried out in the GPU through a process called the rendering pipeline.

In WebGL, as in most real-time 3D graphics, the triangle is the basic element with which models are drawn. The process of drawing in WebGL therefore involves using JavaScript to generate the information that specifies where and how these triangles will be created, and what they will look like: colour, shades, textures, etc. This information is then fed to the GPU, which processes it and returns a view of the scene.
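A minimal sketch of that JavaScript side: the vertex positions for a single triangle are packed into a flat typed array, which would then be uploaded to the GPU. The gl calls are shown only in comments because they need a live WebGL context:

```javascript
// Clip-space (x, y) coordinates for one triangle; WebGL consumes vertex data
// as flat typed arrays like this one.
const triangleVertices = new Float32Array([
   0.0,  0.5,   // top
  -0.5, -0.5,   // bottom left
   0.5, -0.5,   // bottom right
]);

// With a live WebGL context `gl`, the data would be uploaded and drawn like so:
// const buffer = gl.createBuffer();
// gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
// gl.bufferData(gl.ARRAY_BUFFER, triangleVertices, gl.STATIC_DRAW);
// gl.drawArrays(gl.TRIANGLES, 0, 3);  // 3 vertices -> 1 triangle
```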

The key metaphor here is that of a pipeline. GPUs are massively parallel processors, consisting of a large number of computation units designed to work in parallel with each other and in parallel with the CPU; that is true even in mobile devices. With that in mind, graphics APIs such as WebGL are designed to be inherently friendly to such parallel architectures. On typical workloads, and when correctly used, WebGL allows the GPU to execute graphics commands in parallel with any CPU-side work: the GPU and the CPU should not have to wait for each other, and the GPU can max out its parallel processing power. It is to allow running on the GPU that shaders are written in a dedicated GPU-friendly language rather than in JavaScript, and it is to allow the GPU to run many shaders simultaneously that shaders are just callbacks handling one vertex or one pixel each, so that the GPU is free to run them on whichever execution unit and in whichever order it pleases.
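To make the "shaders as per-vertex and per-pixel callbacks" idea concrete, here is a sketch of a minimal GLSL shader pair, kept in JavaScript strings as WebGL pages commonly do. The vertex shader runs once per vertex, the fragment shader once per pixel:

```javascript
// Vertex shader: called once per vertex; its only job here is to pass the
// vertex position through to the rasterizer.
const vertexShaderSource = `
  attribute vec2 a_position;
  void main() {
    gl_Position = vec4(a_position, 0.0, 1.0);
  }
`;

// Fragment shader: called once per pixel covered by a triangle; here it
// paints every covered pixel the same solid colour.
const fragmentShaderSource = `
  precision mediump float;
  void main() {
    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);  // opaque red
  }
`;

// With a live context these would be compiled via gl.createShader /
// gl.shaderSource / gl.compileShader and linked into a program.
```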


In recent years WebGL has brought a great deal of change to the world wide web, powering 3D graphics and browser games and even bringing a 3D world map into our browsers. With the latest stable release, WebGL 2, it has taken another step forward, and in the future we will be able to see even more interesting uses of WebGL.

What is HTTPS? How does it secure your browsing?

HTTP stands for Hypertext Transfer Protocol. When you enter HTTP:// in your address bar in front of the domain, it tells the browser to connect over HTTP. HTTP uses TCP, over port 80, to send and receive data packets over the web.

Now, HTTPS stands for Hypertext Transfer Protocol Secure. When you enter HTTPS:// in your address bar in front of the domain, it tells the browser to connect over HTTPS. HTTPS also uses TCP to send and receive data packets, but it does so over port 443, within a connection encrypted by Transport Layer Security (TLS). The client encrypts data with the server's public key, and only the server's private key can decrypt it. The public key is deployed on the server and included in what you know as an SSL certificate. Certificates are cryptographically signed by a Certificate Authority (CA), and each browser has a list of CAs it implicitly trusts.

When you see HTTPS, it is good news: your information is safe. The website you are working with has made sure that no one can steal your information. Using HTTPS, the two computers agree on a "code" between them, and then they scramble the messages using that "code" so that no one in between can read them. This keeps your information safe from hackers.

HTTPS was actually created by Netscape Communications back in 1994 to use in its Netscape Navigator web browser. HTTPS originally used the SSL protocol which eventually evolved into TLS.

The SSL/TLS layer serves two main purposes:
  • It confirms that you are talking directly to the server you think you are talking to.
  • It ensures that only the server can read the data you send over the network; no one else can.

An SSL connection between the client and server is established by a handshake, which ensures the following:
  • The client is talking to the right server.
  • Both parties have agreed on a 'cipher', which includes the encryption algorithm they will use to exchange data.
  • Both parties have agreed on the keys for this algorithm.

Once the connection is established, both parties can use the agreed algorithm and keys to securely send messages to each other.

What is Laravel?

Laravel is an open-source and powerful PHP MVC (Model View Controller) web framework, created by Taylor Otwell and intended for the development of web applications. Laravel has shaken up the PHP community in a big way, especially when you consider that version 1.0 of Laravel was released only a couple of years ago. It has been generating a lot of buzz with the promise of making web applications fast and simple to create. Using Laravel, you can build and maintain high-quality web applications with minimal fuss.

Laravel is a prominent member of a new generation of web frameworks. So, what is a web framework? Basically, a web framework makes it easier for you to develop your application. Most sites share a common set of functionality, and a framework saves you from rewriting it each time you create a website.

It has a very rich set of features that will boost the speed of web development. If you are familiar with core PHP and advanced PHP, Laravel will make your tasks easier, and it will save you a lot of time if you are planning to develop a website from scratch. Not only that, a website built in Laravel is also secure: it prevents various attacks that can be carried out against websites.

Laravel comes with a lot of resources out of the box: it has a capable router, Eloquent for your models, SwiftMailer for mailing, the Blade engine for your templates, a system for creating migrations, a cache component to cache everything you want, a Monolog logger, etc.

Here are some interesting features that make Laravel stand out -

  • Modular packaging system with a dedicated dependency manager
  • Eloquent ORM (object-relational mapping) is an advanced PHP implementation of the active record pattern 
  • Different ways of accessing relational databases through routing
  • Reverse routing defines a relationship between the links and routes 
  • Orientation toward syntactic sugar
  • Utilities that aid in application deployment and maintenance
  • Easy authentication via a simple and easy-to-use interface
  • Automatic pagination simplifies the task of implementing pagination
  • IoC containers make it possible for new objects to be generated by following the inversion of control (IoC) principle
  • Blade templating engine combines one or more templates with a data model to produce resulting views etc.

Advantages of Laravel -

Laravel embraces a general development philosophy that sets a high priority on creating maintainable code. By following some simple guidelines, you should be able to keep a rapid pace of development and be free to change your code with little fear of breaking existing functionality. Laravel achieves this by adopting several proven web development patterns and best practices.
  • Single Responsibility Pattern
  • DRY (Don't Repeat Yourself)
  • Convention over configuration
  • Unit testing

What is a REST API and how does it work?

REST, or Representational State Transfer, defines a set of functions through which developers can make requests and receive responses via the HTTP protocol, using methods such as GET and POST. It provides interoperability between computer systems on the Internet. REST-compliant web services allow requesting systems to access and manipulate textual representations of web resources using a uniform and predefined set of stateless operations. Other forms of web service exist that expose their own arbitrary sets of operations, such as WSDL and SOAP.
REST is also what browsers themselves use. REST technology is generally preferred to the more robust SOAP technology because REST uses less bandwidth, making it more suitable for internet usage. An API for a website is code that allows two software programs to communicate with each other. With cloud use on the rise, APIs are emerging to expose web services, and REST is a logical choice for building APIs that allow users to connect to and interact with cloud services. Such APIs are commonly used by popular sites like Amazon, Google, LinkedIn, and Twitter.
The REST architectural style describes six constraints:

  • Uniform interface - The uniform interface constraint is fundamental to the design of any REST service. It simplifies and decouples the architecture, which enables each part to evolve independently.
  • Client-server - Separation of concerns is the principle behind the client-server constraint. Separating the user interface concerns from the data storage concerns improves the portability of the user interface across multiple platforms and improves scalability by simplifying the server components.
  • Stateless - The client–server communication is constrained by no client context being stored on the server between requests. Each request from any client contains all the information necessary to service the request, and session state is held in the client. The session state can be transferred by the server to another service such as a database to maintain a persistent state for a period and allow authentication. The client begins sending requests when it is ready to make the transition to a new state.
  • Cacheable - As on the World Wide Web, clients and intermediaries can cache responses. Responses must therefore, implicitly or explicitly, define themselves as cacheable or not to prevent clients from reusing stale or inappropriate data in response to further requests.
  • Layered system - A client cannot ordinarily tell whether it is connected directly to the end server, or to an intermediary along the way. Intermediary servers may improve system scalability by enabling load balancing and by providing shared caches. They may also enforce security policies.
  • Code on demand - Servers can temporarily extend or customize the functionality of a client by transferring executable code.

How does it work? 

A REST API breaks down a transaction to create a series of small modules. Each module addresses a particular underlying part of the transaction. This modularity provides developers with a lot of flexibility.

A REST API explicitly takes advantage of the HTTP methodologies defined by RFC 2616. It uses GET to retrieve a resource; PUT to change the state of or update a resource, which can be an object, file, or block; POST to create that resource; and DELETE to remove it.
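As a sketch of that method-to-operation mapping, the tiny in-memory handler below (the names are our own, not any particular framework's) dispatches GET, POST, PUT, and DELETE against a single resource collection, with every call carrying all the information it needs:

```javascript
// Minimal illustrative REST-style dispatcher over an in-memory store.
const store = new Map();
let nextId = 1;

function handle(method, path, body) {
  const id = path.split('/')[2];          // e.g. '/items/3' -> '3'
  switch (method) {
    case 'GET':                           // retrieve a resource
      return store.get(id) ?? { status: 404 };
    case 'POST': {                        // create a resource
      const newId = String(nextId++);
      store.set(newId, body);
      return { status: 201, id: newId };
    }
    case 'PUT':                           // update / replace a resource
      store.set(id, body);
      return { status: 200 };
    case 'DELETE':                        // remove a resource
      return { status: store.delete(id) ? 204 : 404 };
    default:
      return { status: 405 };             // method not allowed
  }
}

const created = handle('POST', '/items', { name: 'widget' });
console.log(handle('GET', `/items/${created.id}`)); // { name: 'widget' }
```

Note that each call stands alone: the handler keeps no per-client session, which is exactly the statelessness constraint described above.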

With REST, networked components are treated as resources you request access to, like a black box whose implementation details are hidden. The presumption is that all calls are stateless, which is what makes REST useful in cloud applications: stateless components can be freely redeployed if something fails, and they can scale to accommodate load changes, because any request can be directed to any instance of a component and nothing has to be remembered between transactions. That makes REST a natural fit for web use, and the RESTful model is equally helpful in cloud services.