
Software Developer 3

4 - 6 years

25 - 27 Lacs

Posted: 3 months ago | Platform: Naukri


Work Mode

Work from Office

Job Type

Full Time

Job Description

MS or BS in Computer Science or equivalent; 4-6+ years of relevant experience.

Key Responsibilities:

Feature Engineering Pipelines
• Design and implement data pipelines for feature extraction, transformation, and enrichment at scale using tools like Apache Spark, Airflow, or Prefect.
• Collaborate with data scientists to automate and optimize feature engineering workflows.
• Develop real-time and batch processing pipelines to support machine learning and analytics workloads.
• Ensure pipelines are resilient, maintainable, and scalable to handle large volumes of structured and unstructured data.

Secure Data Storage & Management
• Architect and manage secure storage solutions leveraging relational (SQL) and non-relational (NoSQL) databases like Oracle, DynamoDB, and MongoDB.
• Implement encryption techniques, data masking, and role-based access control (RBAC) to safeguard sensitive data.
• Establish data retention policies and backup strategies to ensure compliance with data privacy regulations.
• Track data lineage and manage metadata using tools like Apache Atlas or DataHub.

Data Infrastructure & Tools
• Build and manage scalable ETL/ELT pipelines to streamline data ingestion and transformation processes.
• Leverage cloud-based services (AWS, GCP, Azure) for secure storage and data processing.
• Integrate distributed systems like Hadoop and Kafka for high-volume data handling.

Collaboration & Leadership
• Partner with data science and analytics teams to understand feature engineering needs and translate them into technical solutions.
• Lead efforts to enhance the security and reliability of data pipelines and storage systems.
• Mentor junior engineers on best practices in data engineering and secure software design.

Qualifications:
• Strong background in data structures, algorithms, and system design.
• Hands-on experience with feature engineering pipelines, ETL/ELT tools, and secure data storage solutions.
• Proficiency in programming languages like Python, Java, or Scala.
• In-depth knowledge of data privacy regulations and compliance requirements.
• Familiarity with distributed systems, real-time data processing, and cloud platforms.

This role offers the chance to work on cutting-edge data engineering challenges, ensuring secure and efficient handling of large-scale data. If you are passionate about building feature-rich and secure platforms, we encourage you to apply.

Oracle

Information Technology

Redwood City

135,000 Employees

5,543 Jobs

Key People

  • Safra Catz, CEO
  • Larry Ellison, Co-Founder & CTO
