- Introduction
- Approach
- Requirements
- Running the code
- Troubleshooting configuration issues
- Testing
- Author
## Introduction

Patents grant the exclusive right to produce, market, and sell an invention. They can be extraordinarily profitable; pharmaceutical patents, for example, generate profits in the tens of billions of dollars for a single company in a year. Determining patent infringement, or mapping the landscape surrounding a patent, typically relies on intensive patent reviews or free-text searches. In this project, I add another tool for exploring the patent landscape: a pipeline that processes USPTO files into a front-end network of relationships between patents, based on how patents cite each other.
## Approach

- USPTO XML --> AWS S3
- Apache Spark processing of the XML on an EC2 Hadoop cluster (sketched below)
- Basic patent information stored in a PostgreSQL database
- Relationship information generated from the PostgreSQL data and stored in Neo4j
- Front-end visualization of the network using Neo4j
- Airflow orchestration of the pipeline and of weekly updates with new patent XML
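To make the Spark step concrete, here is a minimal PySpark sketch of the idea: split a weekly USPTO grant file into individual XML documents and extract (citing patent, cited patent) pairs. The S3 path, element names, and splitting logic are illustrative assumptions, not the repository's actual parsing code.

```python
# A minimal sketch of citation extraction with PySpark. Bucket name and
# XML element names are illustrative assumptions.
import re
import xml.etree.ElementTree as ET

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("patent-citations").getOrCreate()
sc = spark.sparkContext

def parse_citations(doc):
    """Yield (citing patent number, cited patent number) pairs for one grant."""
    try:
        root = ET.fromstring(doc)
    except ET.ParseError:
        return  # skip malformed fragments
    pubref = root.find(".//publication-reference")
    citing = pubref.findtext(".//doc-number") if pubref is not None else None
    if citing is None:
        return
    for patcit in root.iter("patcit"):  # one <patcit> element per cited patent
        cited = patcit.findtext(".//doc-number")
        if cited:
            yield (citing, cited)

# Weekly grant files concatenate many XML documents into one file,
# so split each file on the XML declaration before parsing.
files = sc.wholeTextFiles("s3a://my-patent-bucket/grants/*.xml")  # hypothetical bucket
docs = files.flatMap(lambda kv: re.split(r"<\?xml[^>]*\?>", kv[1]))
citations = docs.filter(lambda d: d.strip()).flatMap(parse_citations)

citations.toDF(["patent", "cited_patent"]).show(5)
```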
## Requirements

Languages:

- Python 3.6

Technologies:

- Spark
- PostgreSQL
- Neo4j

Third-Party Libraries:

- AWS CLI
- See the requirements file for all Python requirements.
## Running the code

Configure the AWS CLI:

```
aws configure
```

Configure a VPC with a security group and subnet.
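If you prefer to script this step, here is a minimal boto3 sketch of creating a VPC, subnet, and security group. The CIDR blocks, group name, and open ports are my assumptions; adjust them to your setup.

```python
# Hypothetical sketch: provision a VPC, subnet, and security group with boto3.
# CIDR blocks, names, and ports are illustrative assumptions.
import boto3

ec2 = boto3.client("ec2")

vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]
subnet_id = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.0.0/24")["Subnet"]["SubnetId"]

sg_id = ec2.create_security_group(
    GroupName="patent-pipeline", Description="pipeline hosts", VpcId=vpc_id
)["GroupId"]

# Open SSH, Postgres, and the Neo4j HTTPS browser port within the group.
for port in (22, 5432, 7473):
    ec2.authorize_security_group_ingress(
        GroupId=sg_id,
        IpProtocol="tcp",
        FromPort=port,
        ToPort=port,
        CidrIp="0.0.0.0/0",
    )
```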
Next, provision a PostgreSQL database on RDS. I provisioned it with the AWS UI. Alternatively:

```
aws rds create-db-instance --db-instance-identifier $DBNAME --allocated-storage $STORAGE --db-instance-class $INSTANCE --engine postgres --master-username $USERNAME --master-user-password $PASSWORD
```
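Once the instance is up, the pipeline stores basic patent information in it (see Approach). As a purely hypothetical illustration, such a table might be created like this; the table and column names are my assumptions, not the project's actual schema:

```python
# Hypothetical sketch of a basic patent information table in Postgres.
# Connection details come from the .env file; all names are illustrative.
import psycopg2

conn = psycopg2.connect(
    host="mydb.xxxx.us-east-1.rds.amazonaws.com",  # RDS endpoint
    dbname="patents",
    user="master",
    password="secret",
)
with conn, conn.cursor() as cur:
    cur.execute(
        """
        CREATE TABLE IF NOT EXISTS patents (
            patent_number TEXT PRIMARY KEY,
            title         TEXT,
            grant_date    DATE
        )
        """
    )
conn.close()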
Then set up the Neo4j server:

```
bash ./src/bash/neo4j_setup.sh
```

Make sure to sign into the web UI and change the default password of "neo4j". Follow the link:

```
https://$NEO4J_PUBLIC_DNS:7473/browser/
```
Much of the code in this project relies on an environment file. It will also be distributed to the cluster so that the cluster knows the RDS and Neo4j server information. Fill in the .env_template file, rename it locally to .env, and load it:

```
source .env
```
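Once sourced, the pipeline's Python code can read these values from the environment. A tiny sketch; the variable names are my assumptions (see .env_template for the real keys):

```python
# Hypothetical sketch: reading pipeline settings from the sourced .env values.
# Variable names are illustrative; .env_template defines the real keys.
import os

postgres_host = os.environ["POSTGRES_HOST"]  # RDS endpoint
neo4j_host = os.environ["NEO4J_HOST"]        # Neo4j server DNS
```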
Launch a small EC2 instance for downloading the patent XML:

```
aws ec2 run-instances --image-id ami-04169656fea786776 --count 1 --instance-type t2.micro --key-name $KEYPAIR --security-group-ids $SECURITY_GROUP --subnet-id $SUBNET --query 'Instances[0].InstanceId'
```

SSH into the EC2 instance and run:
```
bash ./src/bash/download_patents.sh
```
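For orientation, the USPTO publishes weekly grant full-text archives on its bulk data site, and the downloaded XML ends up in S3 (see Approach). A minimal sketch of that idea, with a hypothetical file name and bucket; the real file list and logic live in download_patents.sh:

```python
# Hypothetical sketch: fetch one weekly USPTO grant file and push it to S3.
# The file name and bucket are illustrative assumptions.
import urllib.request

import boto3

URL = (
    "https://bulkdata.uspto.gov/data/patent/grant/redbook/fulltext/"
    "2018/ipg180102.zip"
)
urllib.request.urlretrieve(URL, "ipg180102.zip")
boto3.client("s3").upload_file("ipg180102.zip", "my-patent-bucket", "grants/ipg180102.zip")
```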
Start a cluster using the open-source tool Pegasus. Configure the master and worker YAML files under ./vars/spark_cluster; for example, the master file:

```yaml
purchase_type: on_demand
subnet_id: subnet-XXXX
num_instances: 1
key_name: XXXXX-keypair
security_group_ids: sg-XXXXX
instance_type: m4.large
tag_name: spark-cluster
vol_size: 100
role: master
use_eips: true
```

Then start the cluster:

```
bash ./src/bash/provision_cluster.sh
```

SSH into the master:

```
peg ssh spark-cluster 1
```

If you will close your SSH connection during runtime, consider using screen:

```
screen
```

HINT: Use Ctrl + a, then d to detach and leave the session running.
This repository's code can be run with:

```
bash ./run.sh
```

After the Spark job has finished, there is one additional step to load data into Neo4j. SSH into the Neo4j EC2 instance; then, on the local machine, run:
```
bash ./src/bash/on_neo4j.sh
```
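For context, here is a minimal sketch of the kind of load this step performs, using the official neo4j Python driver. The URI, credentials, node label, and relationship type are my assumptions, not necessarily the repository's actual schema:

```python
# Hypothetical sketch: load citation pairs into Neo4j as (:Patent)-[:CITES]->(:Patent).
# URI, credentials, label, and relationship type are illustrative assumptions.
from neo4j import GraphDatabase

driver = GraphDatabase.driver(
    "bolt://localhost:7687", auth=("neo4j", "your-new-password")
)

def load_citation(tx, patent, cited):
    # MERGE keeps patent nodes unique and adds one CITES edge between them.
    tx.run(
        "MERGE (a:Patent {number: $patent}) "
        "MERGE (b:Patent {number: $cited}) "
        "MERGE (a)-[:CITES]->(b)",
        patent=patent,
        cited=cited,
    )

with driver.session() as session:
    session.write_transaction(load_citation, "10000001", "9876543")
driver.close()
```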
Finally, enable Airflow after the initial data is in RDS and Neo4j. On the master:

```
bash ./src/bash/run_airflow.sh
```
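For orientation, here is a toy sketch of what a weekly-update DAG might look like. The DAG name, tasks, commands, and schedule are my assumptions; the repository's actual Airflow setup may differ:

```python
# Hypothetical sketch of a weekly Airflow DAG that pulls new patent XML and
# reruns the pipeline. Names, commands, and schedule are illustrative.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash_operator import BashOperator

dag = DAG(
    "weekly_patent_update",
    start_date=datetime(2018, 1, 1),
    schedule_interval="@weekly",  # the USPTO publishes grant files weekly
)

# Trailing spaces keep Airflow from treating the .sh paths as Jinja templates.
download = BashOperator(
    task_id="download_new_xml",
    bash_command="bash ./src/bash/download_patents.sh ",
    dag=dag,
)
process = BashOperator(
    task_id="process_and_load",
    bash_command="bash ./run.sh ",
    dag=dag,
)

download >> process  # download first, then process and load
```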
## Troubleshooting configuration issues

In the vars folder, there are configuration files for Spark, Hadoop, and Airflow. If there is a configuration error, please consult them for potential differences from your setup.

## Testing

Tests can be run with:
```
bash ./run_tests.sh
```

## Author

Created by Stephen J. Wilson