Usually a database table contains regular columns that physically store data. However, there are cases where the value of a column can be calculated by an expression over other columns. Such columns need no physical storage; they are defined as virtual columns, and their values are calculated on the fly.
Syntax for a virtual column definition:
column_name [data_type] [GENERATED ALWAYS] AS (expression) [VIRTUAL]
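As an illustration of the syntax above (the table and column names are hypothetical), a column computed from two other columns could be declared like this:

```sql
-- total_price is computed from quantity and unit_price; no storage is allocated
CREATE TABLE order_items (
    quantity    NUMBER,
    unit_price  NUMBER,
    total_price NUMBER GENERATED ALWAYS AS (quantity * unit_price) VIRTUAL
);
```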
DROP TABLE schema_name.table_name [CASCADE CONSTRAINTS | PURGE]
schema_name : Name of the schema the table belongs to; it is optional.
table_name : Name of the table to be deleted.
CASCADE CONSTRAINTS : If the table is referenced by foreign key integrity constraints from other tables, this clause must be specified for the deletion. When specified, dropping the table also drops the referencing foreign key constraints from the other tables. If it is not specified and such constraints exist, the database throws an error.
PURGE : If PURGE is specified, the table is not moved to the recycle bin; it is deleted permanently.
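For example, a table referenced by foreign keys from other tables could be dropped like this (the schema and table names are hypothetical):

```sql
-- drops the table and any foreign key constraints referencing it
DROP TABLE hr.orders CASCADE CONSTRAINTS;
```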
Suppose we have the following “people” table in the database.
Let's execute a query to delete the “people” table.
drop table people purge;
As shown in the screenshot, the table is purged after executing the query. If we try to describe (DESC) the table, it will fail.
The ALTER TABLE query can be used to modify the schema of an existing table.
Following is the query syntax:
ALTER TABLE table_name action_;
table_name : Name of the table whose schema is to be modified.
action_ : The modification to perform on the table.
The following modification actions are possible:
To add a new column.
To modify the type of a table column.
To rename a column.
To delete an existing column.
To rename the table.
To add a new column
Suppose we already have the following table “employee” in the database, with the schema definition shown below.
Let's execute the following query to add a column named “contactno”.
alter table employee add contactno number not null;
As shown in the above screenshot, the new column “CONTACTNO” has been added.
To modify the type of a table column
Suppose we want to change the type of the “contactno” column from number to varchar2(13).
Syntax
ALTER TABLE table_name MODIFY column_name column_datatype column_constraint;
Let's execute the following query.
alter table employee modify contactno varchar2(13);
As shown in the above screenshot, the contactno column type has changed from number to varchar2(13). Note that we did not specify “not null” in the query, since the column is already not null and we do not intend to change its constraint.
To rename a column
Following is the query syntax to rename a column.
ALTER TABLE table_name RENAME COLUMN column_oldname TO column_newname;
Suppose we have the following table “employee” in the database, and we want to rename the “contactno” column to “personalcontactno”.
We execute the following query.
alter table employee rename column contactno to personalcontactno;
As shown in the screenshot, the “contactno” column is renamed to “personalcontactno” after executing the alter table query.
To delete an existing column
Syntax to drop multiple columns:
ALTER TABLE table_name DROP (column_name1, column_name2, …);
Syntax to drop a single column:
ALTER TABLE table_name DROP COLUMN column_name;
Suppose we have the following table definition of “employee” in the database.
Suppose we want to delete the column “personalcontactno”. Let's execute the following query.
alter table employee drop column personalcontactno;
As seen in the above screenshot, after executing the query the column “personalcontactno” has been removed from the table definition.
To rename the table
Syntax to rename a table:
ALTER TABLE table_oldname RENAME TO table_newname;
Suppose we have the following table “employee” in the database, and we want to rename it to “people”.
Let's execute the following query.
alter table employee rename to people;
As seen in the above screenshot, the “employee” table has been renamed to “people” after the query execution. If we try to describe the table by its old name “employee”, it will fail.
Table columns having the identity constraint are termed identity columns. They can be used as the primary key constraint of the table.
Following is the syntax to declare an identity column constraint:
GENERATED [ ALWAYS | BY DEFAULT [ ON NULL ] ]
AS IDENTITY [ ( identity_options ) ]
GENERATED : This keyword is mandatory.
ALWAYS : Values for this column are always generated by the database. Explicitly trying to set a value for this column results in an error.
BY DEFAULT : Values for this column are generated automatically only when no value is provided while inserting a record into the table.
BY DEFAULT ON NULL : Values for this column are generated only when no value, or a null value, is provided while inserting a record into the table.
AS IDENTITY : These keywords are mandatory.
identity_options : Multiple options can be added to control how values are generated.
START WITH initial_value : Value generation starts from initial_value. If not provided, generation starts from 1.
INCREMENT BY interval_value : Defines the interval between generated values. The default interval is 1.
CACHE : Indicates the number of values to generate in advance; this is used to improve the performance of insert queries.
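The identity options above can be combined in one declaration. A sketch (the table and column names are hypothetical):

```sql
-- ids start at 100, step by 10, with 20 values pre-allocated per cache fill
CREATE TABLE invoice (
    invoice_id NUMBER GENERATED ALWAYS AS IDENTITY
        (START WITH 100 INCREMENT BY 10 CACHE 20),
    amount     NUMBER NOT NULL
);
```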
schema_name : Name of the schema the table belongs to. It is optional.
table_name : Name of the table to be created.
column_1 : A table may contain multiple columns; this represents the name of the first column, and so on. Multiple column definitions are separated by commas.
data_type : Oracle supports various data types; this represents the data type of the current column.
column_constraint : The constraint definition for the column, if any, such as primary key or not null.
table_constraint : Constraint definitions for the table.
EXAMPLE –
Let's create an Employee table having the columns employee_id, first_name, and last_name. Following is the SQL query for the same.
create table Employee(
employee_id number generated by default as identity,
first_name varchar2(50) not null,
last_name varchar2(50) not null,
primary key(employee_id)
);
Here we have created a table named “Employee” having the following:
Column employee_id : This is the unique identifier column, with type number. The column constraint “generated by default as identity” instructs the database server to automatically generate values for this column when records are inserted.
Column first_name : This is a column of type varchar2 with size 50, which means it can contain a string literal with a maximum of 50 characters. The column constraint “not null” says the value for this column cannot be empty.
Column last_name : Same as the first_name column.
Constraint primary key : This is a table-level constraint declaring that the column employee_id contains unique values that identify each record uniquely.
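Since employee_id is generated by default, we can insert records without supplying it and let the database assign the values. A sketch (the sample names are illustrative):

```sql
-- employee_id is generated automatically
INSERT INTO Employee (first_name, last_name) VALUES ('Suresh', 'Rana');
-- because the column is GENERATED BY DEFAULT (not ALWAYS),
-- an explicit value is also accepted
INSERT INTO Employee (employee_id, first_name, last_name) VALUES (100, 'John', 'Doe');
```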
Jackson Streaming provides a set of streaming APIs to read JSON from, or write JSON to, a source such as a file or string. Being streaming oriented, it has a small memory footprint; it is analogous to the StAX streaming API for XML operations.
We will look at a simple example that writes a JSON string into a .json file, then reads the .json file back and converts the input JSON into a Java Map object.
App.java
package custom.jackson.learning;
import java.io.File;
import java.io.IOException;
import java.util.Map;
import com.fasterxml.jackson.core.JsonEncoding;
import com.fasterxml.jackson.core.JsonFactory;
import com.fasterxml.jackson.core.JsonGenerator;
import com.fasterxml.jackson.databind.ObjectMapper;
public class App {
    public static void main(String[] args) throws IOException {
        // write a JSON object to the Employee.json file
        JsonFactory jsonFactory = new JsonFactory();
        JsonGenerator jsonGenerator = jsonFactory.createGenerator(new File("Employee.json"), JsonEncoding.UTF8);
        // start writing the JSON object
        jsonGenerator.writeStartObject();
        // "firstName":"Suresh"
        jsonGenerator.writeStringField("firstName", "Suresh");
        // "lastName":"Rana"
        jsonGenerator.writeStringField("lastName", "Rana");
        // "skills":["Java","PHP","SQL"]
        jsonGenerator.writeFieldName("skills");
        jsonGenerator.writeStartArray();
        jsonGenerator.writeString("Java");
        jsonGenerator.writeString("PHP");
        jsonGenerator.writeString("SQL");
        jsonGenerator.writeEndArray();
        // "certified":true
        jsonGenerator.writeBooleanField("certified", true);
        // "age":21
        jsonGenerator.writeNumberField("age", 21);
        // end writing the JSON object
        jsonGenerator.writeEndObject();
        jsonGenerator.close();
        System.out.println("JSON writing to Employee.json completed.");
        // read the JSON file as input and convert it to a Map object
        System.out.println("Reading data from Employee.json file");
        ObjectMapper mapper = new ObjectMapper();
        Map employeeMap = mapper.readValue(new File("Employee.json"), Map.class);
        System.out.println("Map created from Employee.json file ::" + employeeMap);
    }
}
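Tracing the generator calls above, the program writes an Employee.json file containing the following object (on a single line, as JsonGenerator produces compact output by default):

```json
{"firstName":"Suresh","lastName":"Rana","skills":["Java","PHP","SQL"],"certified":true,"age":21}
```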
Jackson libraries are open source, distributed under the Apache License, and are available in the Maven repositories. We will stick to Maven as the build tool.
We will add the following dependencies into the Maven project's POM.xml.
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>custom.validation</groupId>
    <artifactId>user</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <description>This is hello world project for javax validation of beans</description>
    <dependencies>
        <dependency>
            <groupId>javax.validation</groupId>
            <artifactId>validation-api</artifactId>
            <version>2.0.1.Final</version>
        </dependency>
        <dependency>
            <groupId>org.hibernate.validator</groupId>
            <artifactId>hibernate-validator</artifactId>
            <version>6.0.13.Final</version>
        </dependency>
        <dependency>
            <groupId>org.glassfish</groupId>
            <artifactId>javax.el</artifactId>
            <version>3.0.0</version>
        </dependency>
    </dependencies>
</project>
2. We will write an Employee class with annotations for bean validations. Bean Validation annotations are provided by the javax.validation package.
In general, each annotation can take the value to validate against and the validation failure message as input parameters.
Employee.java
package user;
import javax.validation.constraints.AssertTrue;
import javax.validation.constraints.Email;
import javax.validation.constraints.Max;
import javax.validation.constraints.Min;
import javax.validation.constraints.NotNull;
import javax.validation.constraints.Size;
public class Employee {
    @NotNull(message="FirstName should not be null")
    private String firstName; // to demonstrate the NotNull validation annotation
    @NotNull(message="LastName should not be null")
    private String lastName; // to demonstrate the NotNull validation annotation
    @AssertTrue
    private boolean isCertified; // to demonstrate the AssertTrue validation annotation
    @Size(min=10, max=200, message="Address should be between 10 and 200 characters")
    private String address; // to demonstrate the Size validation annotation
    @Min(value=18, message="Age should be more than 18")
    @Max(value=61, message="Age should be less than 61")
    private int age; // to demonstrate the Min/Max validation annotations
    @Email(message = "Should have valid email id")
    private String email; // to demonstrate the Email validation annotation

    public String getFirstName() {
        return firstName;
    }
    public void setFirstName(String firstName) {
        this.firstName = firstName;
    }
    public String getLastName() {
        return lastName;
    }
    public void setLastName(String lastName) {
        this.lastName = lastName;
    }
    public boolean isCertified() {
        return isCertified;
    }
    public void setCertified(boolean isCertified) {
        this.isCertified = isCertified;
    }
    public String getAddress() {
        return address;
    }
    public void setAddress(String address) {
        this.address = address;
    }
    public int getAge() {
        return age;
    }
    public void setAge(int age) {
        this.age = age;
    }
    public String getEmail() {
        return email;
    }
    public void setEmail(String email) {
        this.email = email;
    }
}
3. To execute validations programmatically, the javax.validation package provides the Validator API to carry out validations. With some frameworks, such as Hibernate, we do not need to trigger the validation process explicitly, as it is taken care of internally by the framework. To trigger it explicitly, we use it as follows.
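A minimal sketch of triggering validation explicitly on the Employee bean (the class name ValidationDemo and the sample field values are illustrative):

```java
import java.util.Set;
import javax.validation.ConstraintViolation;
import javax.validation.Validation;
import javax.validation.Validator;

public class ValidationDemo {
    public static void main(String[] args) {
        Employee employee = new Employee();
        employee.setAge(15); // violates @Min(value=18)

        // obtain a Validator from the default factory
        Validator validator = Validation.buildDefaultValidatorFactory().getValidator();

        // run all constraint checks declared on Employee
        Set<ConstraintViolation<Employee>> violations = validator.validate(employee);
        for (ConstraintViolation<Employee> violation : violations) {
            System.out.println(violation.getMessage());
        }
    }
}
```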
In the previous tutorial, we hardcoded several paths in the buildConfig configuration file. However, Ant provides the capability to declare those values as properties in the configuration file, to load values from a resource file, or to pass key-value pairs from the command line. We will modify the same buildConfig file.
1. Declare values as properties in the configuration file:
We have a cleanLogs target in the buildConfig file, which receives the path of the directory whose files are to be deleted. Let's declare that path as a property value at the beginning of the file, like below.
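A sketch of what this could look like (the property name logs.dir, its value, and the fileset pattern are assumptions, since the actual path from the previous tutorial is not shown here):

```xml
<!-- declared near the top of the buildConfig file -->
<property name="logs.dir" value="logs"/>

<!-- the cleanLogs target now reads the path from the property -->
<target name="cleanLogs">
    <delete>
        <fileset dir="${logs.dir}" includes="**/*.log"/>
    </delete>
</target>
```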