Big Data Developer in Jersey City, NJ at Verisk Analytics

Job Snapshot

  • Employee Type:
    Full-Time
  • Experience:
    6 to 8 years
  • Date Posted:
    8/29/2018

Job Description

If you’re looking for a career that transforms, inspires, challenges, and rewards you, then come join us! Verisk Analytics is a global supplier of risk assessment services and decision analytics for customers in a variety of markets, including insurance, healthcare, financial services, supply chain, and others. We’re a thriving public company with solid revenue growth and earnings and offices worldwide. And we’re continually looking for ways to augment our existing markets and expand into new markets with excellent growth potential. At Verisk, you’ll be part of an organization that’s committed to serving the long-term interests of our stakeholders, including the communities where we operate.

Position Summary:

Information Systems & Technology is seeking an experienced Big Data application developer to design and build applications on our Big Data platform. This person will be a key part of the Application Development team, working with the latest Big Data technologies in the Hadoop ecosystem.

The delivery focus will be on leveraging established components such as HDFS, Hive, HBase, Pig, Flume, and Spark, along with others still in incubation, to build applications that solve business problems, and on extending the platform to enhance its capabilities.

Key responsibilities include developing applications on the data and visualization platform to transform and compute over data spanning multiple data sets.

Keywords: R, Python, Java, Scala, C++, Big Data, Apache Spark, AWS, Software Engineer.

Responsibilities:
  • Develop APIs
  • Provide architecture designs and support for the project
  • Develop programs in Spark, Scala, and other big data technologies (a minimal illustrative sketch follows this list)
  • Define and design data lakes
  • Write extensive Java code to migrate projects from the traditional data warehouse
  • Design and implement a data transfer framework for ETL into the data lake
  • Learn new technologies and implement the required tools
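
For context, here is a minimal sketch of the kind of Spark/Scala program the ETL responsibilities above describe: reading a warehouse extract and landing it in a data lake. The paths, the column name, and the object name are hypothetical illustrations, not details from this posting.

```scala
import org.apache.spark.sql.SparkSession

// Hypothetical example: land a warehouse extract in a data lake.
object WarehouseToLake {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("WarehouseToLake")
      .getOrCreate()

    // Read a CSV extract exported from the traditional data warehouse
    // (the s3:// paths are placeholders).
    val orders = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("s3://warehouse-extracts/orders/")

    // Write to the lake as Parquet, partitioned for efficient querying
    // (assumes the extract carries an order_date column).
    orders.write
      .mode("overwrite")
      .partitionBy("order_date")
      .parquet("s3://data-lake/raw/orders/")

    spark.stop()
  }
}
```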

Requirements:
  • Bachelor's degree in Computer Science or a related field
  • 6-8 years of strong Java experience
  • 2-3 years of Big Data development experience (hands-on work in dev/prod environments); MapR experience is a plus
  • 2+ years of a strong development track record using Agile methodologies
  • 3+ years of experience developing and implementing large-scale big data environments and workflows
  • 3+ years of experience with BI tools such as Spotfire or Tableau
  • Hands-on experience with Scala, Spark/Python, Spark GraphX/GraphFrames, and MLlib distributed computing
  • Strong experience with scalable API, REST, and microservices development, plus DevOps tools for cloud, hybrid, and container deployment
  • Strong experience with NoSQL, NewSQL, and MPP databases, data modeling, and SQL-on-Hadoop tools
  • Knowledge of data warehousing and migration strategies to data lakes
  • Exposure to developing analytics, ML, data mining, and streaming applications

We offer an excellent compensation package. Our competitive benefits include full health care options, a 401(k) plan, and generous paid time off.

If this opportunity looks exciting and challenging to you, please click Submit Now to apply.
