Senior Engineer - Big Data Engineer

About us:

As a Fortune 50 company with more than 350,000 team members worldwide, Target is an iconic brand and one of America's leading retailers.​

 

Working at Target means the opportunity to help all families discover the joy of everyday life. Caring for our communities is woven into who we are, and we invest in the places we collectively live, work and play. We prioritize relationships, fuel and develop talent by creating growth opportunities, and succeed as one Target team. At our core, our purpose is ingrained in who we are, what we value, and how we work. It’s how we care, grow, and win together.

Target as a tech company? Absolutely. We’re the behind-the-scenes powerhouse that fuels Target’s passion and commitment to cutting-edge innovation. We anchor every facet of one of the world’s best-loved retailers with a strong technology framework that relies on the latest tools and technologies—and the brightest people—to deliver incredible value to guests online and in stores. Target Technology Services is on a mission to offer the systems, tools and support that guests and team members need and deserve. Our high-performing teams balance independence with collaboration, and we pride ourselves on being versatile, agile and creative. We drive industry-leading technologies in support of every angle of the business, and help ensure that Target operates smoothly, securely and reliably from the inside out.

Team:

Digital Placement and Allocation (DPA) is an intelligent system that recommends which items should be placed in which Target buildings, and in what quantities, across 1,900+ stores and 10 fulfillment centers. This system solves inventory planning for Target's entire digital business. We are building big data analytics capabilities to uncover product and operational insights, analyzing a variety of data sources and proposing solutions to strategic problems on a periodic basis.

About you:

  • 5+ years of software development experience

  • Practical work experience in Python or Scala is mandatory

  • Must have 4+ years of working experience with the Spark and Hadoop frameworks

  • Hive (HQL) and SQL experience is necessary; should have worked with RDBMS systems such as Postgres or Oracle

  • Good exposure to NoSQL stores such as Cassandra and streaming solutions such as Kafka is needed

  • To gather data from disparate sources, then clean, organize, process, and analyze it to extract valuable insights and information (see the illustrative sketch after this list)

  • To identify new sources of data and develop methods to improve data mining, analysis, and reporting.

  • To write SQL queries to extract data from the data warehouse.

  • To create data definitions for new database tables, or for alterations to existing ones, for analysis purposes

  • To present the findings in reports (in table, chart, or graph format) to help the stakeholders in the decision-making process.

  • To develop relational databases for sourcing and collecting data.

  • To monitor the performance of data mining systems and fix issues, if any

  • To stay current with new and evolving technologies via formal training and self-directed education
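
For illustration only: the bullets above describe typical Spark and Hive warehouse work, so a minimal PySpark sketch of that flow is included below. All table names, columns, and the JDBC target are hypothetical examples, not a description of Target's actual systems.

    # Hypothetical sketch: read item/store demand rows from a Hive warehouse table,
    # clean them, aggregate units per item and store, and land the result in Postgres.
    # Every table, column, and connection name here is made up for illustration.
    from pyspark.sql import SparkSession, functions as F

    spark = (
        SparkSession.builder
        .appName("demand-rollup-example")
        .enableHiveSupport()          # lets spark.sql() query Hive-managed tables
        .getOrCreate()
    )

    # Extract with SQL from a (hypothetical) warehouse table
    demand = spark.sql("""
        SELECT item_id, store_id, order_date, units
        FROM digital_sales.item_demand
        WHERE order_date >= date_sub(current_date(), 28)
    """)

    # Clean: drop rows missing keys, coerce units to a non-negative integer
    cleaned = (
        demand
        .dropna(subset=["item_id", "store_id"])
        .withColumn("units", F.greatest(F.col("units").cast("int"), F.lit(0)))
    )

    # Aggregate: 4-week units per item per store
    rollup = cleaned.groupBy("item_id", "store_id").agg(F.sum("units").alias("units_4wk"))

    # Load into a relational store (hypothetical Postgres target) for reporting
    (
        rollup.write
        .format("jdbc")
        .option("url", "jdbc:postgresql://db-host:5432/analytics")
        .option("dbtable", "reporting.item_store_units_4wk")
        .option("user", "etl_user")
        .option("password", "***")
        .mode("overwrite")
        .save()
    )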
