
6304 Scala Jobs - Page 50

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

5.0 years

0 Lacs

Pune, Maharashtra, India

Remote

Entity: Technology
Job Family Group: IT&S Group

Job Description:

You will work with
You will be part of a high-energy, top-performing team of engineers and product managers, working alongside technology and business leaders to support the execution of transformative data initiatives that make a real impact.

Let me tell you about the role
As a Senior Data Platform Services Engineer, you will play a strategic role in shaping and securing enterprise-wide technology landscapes, ensuring their resilience, performance, and compliance. You will provide deep expertise in security, infrastructure, and operational excellence, driving large-scale transformation and automation initiatives. Your role will encompass platform architecture, system integration, cybersecurity, and operational continuity. You will collaborate with engineers, architects, and business partners to establish robust governance models, technology roadmaps, and innovative security frameworks that safeguard critically important enterprise applications.

What You Will Deliver
- Contribute to enterprise technology architecture, security frameworks, and platform engineering for our core data platform.
- Support end-to-end security implementation across our unified data platform, ensuring compliance with industry standards and regulatory requirements.
- Help drive operational excellence by supporting system performance, availability, and scalability.
- Contribute to modernization and transformation efforts, assisting in integration with enterprise IT systems.
- Assist in the design and execution of automated security monitoring, vulnerability assessments, and identity management solutions.
- Apply DevOps, CI/CD, and Infrastructure-as-Code (IaC) approaches to improve deployment and platform consistency.
- Support disaster recovery planning and high availability for enterprise platforms.
- Collaborate with engineering and operations teams to ensure platform solutions align with business needs.
- Provide guidance on platform investments, security risks, and operational improvements.
- Partner with senior engineers to support long-term technical roadmaps that reduce operational burden and improve scalability.

What you will need to be successful (experience and qualifications)

Technical Skills We Need From You
- Bachelor's degree in technology, engineering, or a related technical discipline.
- 3-5 years of experience in enterprise technology, security, or platform operations in large-scale environments.
- Experience with CI/CD pipelines, DevOps methodologies, and Infrastructure-as-Code (e.g., AWS CDK, Azure Bicep).
- Knowledge of ITIL, Agile delivery, and enterprise governance frameworks.
- Proficiency with big data technologies such as Apache Spark, Hadoop, Kafka, and Flink.
- Experience with cloud platforms (AWS, GCP, Azure) and cloud-native data solutions (BigQuery, Redshift, Snowflake, Databricks).
- Strong skills in SQL, Python, or Scala, and hands-on experience with data platform engineering.
- Understanding of data modeling, data warehousing, and distributed systems architecture.

Essential Skills
- Technical experience in Microsoft Azure, AWS, Databricks, and Palantir.
- Understanding of data ingestion pipelines, governance, security, and data visualization.
- Experience supporting multi-cloud data platforms at scale, balancing cost, performance, and resilience.
- Familiarity with performance tuning, data indexing, and distributed query optimization.
- Exposure to both real-time and batch data streaming architectures.

Skills That Set You Apart
- Proven success navigating global, highly regulated environments, ensuring compliance, security, and enterprise-wide risk management.
- AI/ML-driven data engineering expertise, applying intelligent automation to optimize workflows.

About bp
Our purpose is to deliver energy to the world, today and tomorrow. For over 100 years, bp has focused on discovering, developing, and producing oil and gas in the nations where we operate. We are one of the few companies globally that can provide governments and customers with an integrated energy offering. Delivering our strategy sustainably is fundamental to achieving our ambition to be a net zero company by 2050 or sooner.

We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform crucial job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation. We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, sex, gender, gender expression, sexual orientation, age, marital status, veteran status, or disability status.

Travel Requirement: Up to 10% travel should be expected with this role.
Relocation Assistance: This role is eligible for relocation within country.
Remote Type: This position is a hybrid of office/remote working.

Skills: Agility core practices, Analytics, API and platform design, Business Analysis, Cloud Platforms, Coaching, Communication, Configuration management and release, Continuous deployment and release, Data Structures and Algorithms (Inactive), Digital Project Management, Documentation and knowledge sharing, Facilitation, Information Security, iOS and Android development, Mentoring, Metrics definition and instrumentation, NoSQL data modelling, Relational Data Modelling, Risk Management, Scripting, Service operations and resiliency, Software Design and Development, Source control and code management {+ 4 more}

Legal Disclaimer: We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, sex, gender, gender expression, sexual orientation, age, marital status, socioeconomic status, neurodiversity/neurocognitive functioning, veteran status or disability status. Individuals with an accessibility need may request an adjustment/accommodation related to bp's recruiting process (e.g., accessing the job application, completing required assessments, participating in telephone screenings or interviews, etc.). If you would like to request an adjustment/accommodation related to the recruitment process, please contact us. If you are selected for a position and depending upon your role, your employment may be contingent upon adherence to local policy. This may include pre-placement drug screening, medical review of physical fitness for the role, and background checks.

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

Remote

Entity: Technology
Job Family Group: IT&S Group

Job Description:

You will work with
You will be part of a high-energy, top-performing team of engineers and product managers, working alongside technology and business leaders to support the execution of transformative data initiatives that make a real impact.

Let me tell you about the role
As a Senior Data Tooling Services Engineer, you will play a strategic role in shaping and securing enterprise-wide technology landscapes, ensuring their resilience, performance, and compliance. You will provide deep expertise in security, infrastructure, and operational excellence, driving large-scale transformation and automation initiatives. Your role will encompass platform architecture, system integration, cybersecurity, and operational continuity. You will collaborate with engineers, architects, and business partners to establish robust governance models, technology roadmaps, and innovative security frameworks that safeguard critically important enterprise applications.

What You Will Deliver
- Contribute to enterprise technology architecture, security frameworks, and platform engineering for our core data platform.
- Support end-to-end security implementation across our unified data platform, ensuring compliance with industry standards and regulatory requirements.
- Help drive operational excellence by supporting system performance, availability, and scalability.
- Contribute to modernization and transformation efforts, assisting in integration with enterprise IT systems.
- Assist in the design and execution of automated security monitoring, vulnerability assessments, and identity management solutions.
- Apply DevOps, CI/CD, and Infrastructure-as-Code (IaC) approaches to improve deployment and platform consistency.
- Support disaster recovery planning and high availability for enterprise platforms.
- Collaborate with engineering and operations teams to ensure platform solutions align with business needs.
- Provide guidance on platform investments, security risks, and operational improvements.
- Partner with senior engineers to support long-term technical roadmaps that reduce operational burden and improve scalability.

What you will need to be successful (experience and qualifications)

Technical Skills We Need From You
- Bachelor's degree in technology, engineering, or a related technical discipline.
- 3-5 years of experience in enterprise technology, security, or platform operations in large-scale environments.
- Experience with CI/CD pipelines, DevOps methodologies, and Infrastructure-as-Code (e.g., AWS CDK, Azure Bicep).
- Knowledge of ITIL, Agile delivery, and enterprise governance frameworks.
- Proficiency with big data technologies such as Apache Spark, Hadoop, Kafka, and Flink.
- Experience with cloud platforms (AWS, GCP, Azure) and cloud-native data solutions (BigQuery, Redshift, Snowflake, Databricks).
- Strong skills in SQL, Python, or Scala, and hands-on experience with data platform engineering.
- Understanding of data modeling, data warehousing, and distributed systems architecture.

Essential Skills
- Technical experience in Microsoft Azure, AWS, Databricks, and Palantir.
- Understanding of data ingestion pipelines, governance, security, and data visualization.
- Experience supporting multi-cloud data platforms at scale, balancing cost, performance, and resilience.
- Familiarity with performance tuning, data indexing, and distributed query optimization.
- Exposure to both real-time and batch data streaming architectures.

Skills That Set You Apart
- Proven success navigating global, highly regulated environments, ensuring compliance, security, and enterprise-wide risk management.
- AI/ML-driven data engineering expertise, applying intelligent automation to optimize workflows.

About bp
Our purpose is to deliver energy to the world, today and tomorrow. For over 100 years, bp has focused on discovering, developing, and producing oil and gas in the nations where we operate. We are one of the few companies globally that can provide governments and customers with an integrated energy offering. Delivering our strategy sustainably is fundamental to achieving our ambition to be a net zero company by 2050 or sooner.

We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform crucial job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation. We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, sex, gender, gender expression, sexual orientation, age, marital status, veteran status, or disability status.

Travel Requirement: Up to 10% travel should be expected with this role.
Relocation Assistance: This role is eligible for relocation within country.
Remote Type: This position is a hybrid of office/remote working.

Skills: Agility core practices, Analytics, API and platform design, Business Analysis, Cloud Platforms, Coaching, Communication, Configuration management and release, Continuous deployment and release, Data Structures and Algorithms (Inactive), Digital Project Management, Documentation and knowledge sharing, Facilitation, Information Security, iOS and Android development, Mentoring, Metrics definition and instrumentation, NoSQL data modelling, Relational Data Modelling, Risk Management, Scripting, Service operations and resiliency, Software Design and Development, Source control and code management {+ 4 more}

Legal Disclaimer: We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, sex, gender, gender expression, sexual orientation, age, marital status, socioeconomic status, neurodiversity/neurocognitive functioning, veteran status or disability status. Individuals with an accessibility need may request an adjustment/accommodation related to bp's recruiting process (e.g., accessing the job application, completing required assessments, participating in telephone screenings or interviews, etc.). If you would like to request an adjustment/accommodation related to the recruitment process, please contact us. If you are selected for a position and depending upon your role, your employment may be contingent upon adherence to local policy. This may include pre-placement drug screening, medical review of physical fitness for the role, and background checks.

Posted 3 weeks ago

Apply

6.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Location: Pune

About Team & About Role
As a Senior Software Engineer (SSE) in the Continuous Product Delivery (CPD) team, you will play a key role in providing long-term stability and last-mile delight to our customers. You will lead a small team of engineers and work closely with the core engineering team and the product and support organization. You will work across Rubrik releases on our data backup & management offering. You are expected to develop a strong understanding of our product and engineering architecture, such as our distributed job framework, data lifecycle management, filesystem, and metadata store. Within CPD, you will work closely with the Platform and Systems Engineering team at Rubrik. The mission of this team is to develop a highly reliable, secure, scalable, and performant software-defined platform that radically simplifies building, deploying, and managing physical and virtual appliances on-premise and in the cloud. Rubrik CPD SEs are self-starters, driven, and can manage themselves. We believe in giving engineers responsibility, not tasks. Our goal is to motivate and challenge you to do your best work by empowering you to make your own decisions. To do that, we have a very transparent structure that gives people the freedom to exercise judgment, even in critical scenarios. This develops more capable engineers and keeps everyone engaged and happy, ultimately leading to customer delight.

Key Responsibilities
- Ownership of features, including design, implementation, and testing.
- Design and develop infrastructure services and processes for regularly performing Linux kernel and Ubuntu OS upgrades.
- Diagnose and resolve problems in complex customer environments.
- Develop and maintain code written in Python and/or Scala, where required.
- Troubleshoot complex software problems in a timely and accurate manner.
- Collaborate with cross-functional teams to define, design, and ship new features.
- Write and maintain technical documentation for software systems and applications.
- Participate in code reviews and ensure adherence to coding standards.
- Continuously improve software quality through process improvement initiatives.
- Keep up to date with emerging trends in software development.

About You
- BTech/MTech/PhD in Computer Science.
- 6-10 years of software development experience on Linux, preferably in the Platform/Systems/Kernel or Networking domain.
- Strong fundamentals in data structures, algorithms, and distributed systems design.
- Solid grasp of major Linux distributions, such as Ubuntu.
- Strong background in systems programming.
- Expertise in debugging and troubleshooting performance and system-level issues.
- Good experience with performing Linux kernel upgrades or equivalent, and kernel debugging.
- Excellent troubleshooting, problem-solving, and analytical skills.
- Strong communication skills and ability to work in a team environment.
- Proficient in a scripting language and either C++, Java, or Scala.
- Large distributed systems design and development experience is preferred.
- Knowledge of Storage, Filesystems, or Data Protection technologies is a plus.

Join Us in Securing the World's Data
Rubrik (NYSE: RBRK) is on a mission to secure the world's data. With Zero Trust Data Security™, we help organizations achieve business resilience against cyberattacks, malicious insiders, and operational disruptions. Rubrik Security Cloud, powered by machine learning, secures data across enterprise, cloud, and SaaS applications. We help organizations uphold data integrity, deliver data availability that withstands adverse conditions, continuously monitor data risks and threats, and restore businesses with their data when infrastructure is attacked.

LinkedIn | X (formerly Twitter) | Instagram | Rubrik.com

Inclusion @ Rubrik
At Rubrik, we are dedicated to fostering a culture where people from all backgrounds are valued, feel they belong, and believe they can succeed. Our commitment to inclusion is at the heart of our mission to secure the world's data. Our goal is to hire and promote the best talent, regardless of background. We continually review our hiring practices to ensure fairness and strive to create an environment where every employee has equal access to opportunities for growth and excellence. We believe in empowering everyone to bring their authentic selves to work and achieve their fullest potential. Our inclusion strategy focuses on three core areas of our business and culture:
- Our Company: We are committed to building a merit-based organization that offers equal access to growth and success for all employees globally. Your potential is limitless here.
- Our Culture: We strive to create an inclusive atmosphere where individuals from all backgrounds feel a strong sense of belonging, can thrive, and do their best work. Your contributions help us innovate and break boundaries.
- Our Communities: We are dedicated to expanding our engagement with the communities we operate in, creating opportunities for underrepresented talent and driving greater innovation for our clients. Your impact extends beyond Rubrik, contributing to safer and stronger communities.

Equal Opportunity Employer/Veterans/Disabled
Rubrik is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, or protected veteran status and will not be discriminated against on the basis of disability. Rubrik provides equal employment opportunities (EEO) to all employees and applicants for employment without regard to race, color, religion, sex, national origin, age, disability or genetics. In addition to federal law requirements, Rubrik complies with applicable state and local laws governing nondiscrimination in employment in every location in which the company has facilities. This policy applies to all terms and conditions of employment, including recruiting, hiring, placement, promotion, termination, layoff, recall, transfer, leaves of absence, compensation and training. Federal law requires employers to provide reasonable accommodation to qualified individuals with disabilities. Please contact us at hr@rubrik.com if you require a reasonable accommodation to apply for a job or to perform your job. Examples of reasonable accommodation include making a change to the application process or work procedures, providing documents in an alternate format, using a sign language interpreter, or using specialized equipment.

EEO IS THE LAW: NOTIFICATION OF EMPLOYEE RIGHTS UNDER FEDERAL LABOR LAWS

Posted 3 weeks ago

Apply

6.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Please find below the detailed JD:

🔹 Experience: 6 to 8+ years (hands-on)
🔹 Location: Pune (WFO)
🔹 Notice Period: 0-30 days

Must Have:
- Proficiency in at least one of the following programming languages: Java, Scala, or Python.
- Good understanding of SQL.
- Experience developing and deploying at least one end-to-end data storage/processing pipeline.
- Strong experience in Spark development with batch and streaming.
- Intermediate-level expertise in HDFS and Hive.
- Experience with PySpark and data engineering; ETL implementation and migration to Spark.
- Experience working with a Hadoop cluster.
- Python, PySpark, and Databricks development, with knowledge of cloud.
- Experience with Kafka and Spark streaming (DStream and Structured Streaming).
- Experience with Jupyter notebooks or other developer tools.
- Experience with Airflow or other workflow engines.
- Good communication and logical skills.

Good to Have Skills:
- Prior experience writing Spark jobs using Java is highly appreciated.
- Prior experience working with Cloudera Data Platform (CDP).
- Hands-on experience with NoSQL databases like HBase, Cassandra, Elasticsearch, etc.
- Experience using Maven and Git.
- Agile Scrum methodologies.
- Flink and Kudu streaming.
- Automation of workflows, CI/CD.
- NiFi streaming and transformation.
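This posting leans heavily on Spark Structured Streaming with Kafka. Purely as a hypothetical illustration of that kind of work (the topic name, JSON schema, broker address, and output paths below are invented for the sketch, not taken from the posting), a minimal Scala job might look like this:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types._

object OrdersStreamJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("orders-stream")
      .getOrCreate()
    import spark.implicits._

    // Schema of the incoming JSON payload (assumed for the example).
    val orderSchema = StructType(Seq(
      StructField("order_id", StringType),
      StructField("amount", DoubleType),
      StructField("event_time", TimestampType)
    ))

    // Read the raw Kafka stream; the `value` column carries the JSON bytes.
    val raw = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")
      .option("subscribe", "orders")
      .load()

    // Parse the JSON and aggregate amounts into 1-hour event-time windows,
    // tolerating events that arrive up to 15 minutes late.
    val hourly = raw
      .select(from_json($"value".cast("string"), orderSchema).as("o"))
      .select("o.*")
      .withWatermark("event_time", "15 minutes")
      .groupBy(window($"event_time", "1 hour"))
      .agg(sum($"amount").as("total_amount"), count("*").as("orders"))

    // Write each finalized window to Parquet, with checkpointing for recovery.
    hourly.writeStream
      .format("parquet")
      .option("path", "/data/curated/orders_hourly")
      .option("checkpointLocation", "/data/checkpoints/orders_hourly")
      .outputMode("append")
      .start()
      .awaitTermination()
  }
}
```

The watermark bounds how late an event may arrive before its one-hour window is considered complete and flushed to the Parquet sink.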

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

Pune, Maharashtra, India

Remote

Entity: Technology
Job Family Group: IT&S Group

Job Description:

You will work with
This team is responsible for response and management of cyber incidents, applying an intelligence-led approach for identification, mitigation, and rapid response to safeguard bp on a global scale. By applying lessons learned and data analytics, they establish engineering principles and enhance the technology stack to continuously bolster bp's cybersecurity posture.

Let me tell you about the role
We are looking for a Security Engineering Specialist who will support a team dedicated to enabling security experts and software engineers to write, deploy, integrate, and maintain security standards and develop secure applications and automations. You will advocate for and help ensure that cloud, infrastructure, and data teams adhere to secure policies, uncover vulnerabilities and provide remediation insights, and contribute to the adoption of secure practices. You will stay informed on industry and technology trends to strengthen bp's security posture and contribute to a culture of excellence.

What you will deliver
- Support development of and implement platform security standards, co-design schemas, ensure quality at the source of infrastructure build and configuration, and find opportunities to automate manual secure processes wherever possible.
- Work with business partners to implement security strategies and coordinate remediation activities to ensure products safely meet business requirements.
- Contribute as a subject matter expert in at least one domain (cloud, infrastructure, or data).
- Provide hands-on support to teams on secure configuration and remediation strategies.
- Align strategy, processes, and decision-making across teams.
- Actively participate in a positive engagement and governance framework and contribute to an inclusive work environment with teams and collaborators including engineers, developers, product owners, product managers, and portfolio managers.
- Evolve the security roadmap to meet anticipated future requirements and needs.
- Provide support to the squads and teams through technical guidance and by managing dependencies and risks.
- Create and articulate materials on how to embed and measure security in our cloud, infrastructure, or data environments.
- Contribute to mentoring and promote a culture of continuous development.

What you will need to be successful (experience and qualifications)
- 3+ years of experience in security engineering or technical infrastructure roles.
- A minimum of 3 years of cybersecurity experience in one of the following areas: Cloud (AWS and Azure), Infrastructure (IAM, network, endpoint, etc.), or Data (DLP, data lifecycle management, etc.).
- Deep, hands-on experience designing security architectures and solutions for reliable and scalable data infrastructure, cloud, and data products in complex environments.
- Development experience in one or more object-oriented programming languages (e.g., Python, Scala, Java, C#) and/or development experience in one or more cloud environments (including AWS, Azure, Alibaba, etc.).
- Exposure to or experience with full-stack development.
- Experience with automation and scripting for security tasks (e.g., IaC, CI/CD integration) and security tooling (e.g., vulnerability scanners, CNAPP, endpoint and/or DLP).
- Deep knowledge and hands-on experience in technologies across all data lifecycle stages.
- Foundational knowledge of security standards, industry laws, and regulations such as the Payment Card Industry Data Security Standard (PCI-DSS), General Data Protection Regulation (GDPR), California Consumer Privacy Act (CCPA), and Sarbanes-Oxley (SOX).
- Strong collaborator management and the ability to influence teams through technical guidance.
- A continuous learning and improvement approach.

About bp
Our purpose is to deliver energy to the world, today and tomorrow. For over 100 years, bp has focused on discovering, developing, and producing oil and gas in the nations where we operate. We are one of the few companies globally that can provide governments and customers with an integrated energy offering. Delivering our strategy sustainably is fundamental to achieving our ambition to be a net zero company by 2050 or sooner.

We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform crucial job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation. We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, sex, gender, gender expression, sexual orientation, age, marital status, veteran status, or disability status. Even though the job is advertised as full time, please contact the hiring manager or the recruiter, as flexible working arrangements may be considered.

Travel Requirement: Up to 10% travel should be expected with this role.
Relocation Assistance: This role is eligible for relocation within country.
Remote Type: This position is a hybrid of office/remote working.

Skills: Automation system digital security, Client Counseling, Conformance review, Digital Forensics, Incident management, incident investigation and response, Information Assurance, Information Security, Information security behaviour change, Intrusion detection and analysis, Legal and regulatory environment and compliance, Risk Management, Secure development, Security administration, Security architecture, Security evaluation and functionality testing, Solution Architecture, Stakeholder Management, Supplier security management, Technical specialism

Legal Disclaimer: We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, sex, gender, gender expression, sexual orientation, age, marital status, socioeconomic status, neurodiversity/neurocognitive functioning, veteran status or disability status. Individuals with an accessibility need may request an adjustment/accommodation related to bp's recruiting process (e.g., accessing the job application, completing required assessments, participating in telephone screenings or interviews, etc.). If you would like to request an adjustment/accommodation related to the recruitment process, please contact us. If you are selected for a position and depending upon your role, your employment may be contingent upon adherence to local policy. This may include pre-placement drug screening, medical review of physical fitness for the role, and background checks.

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

On-site

We are seeking a skilled Data Engineer to join our growing data team in India. You will be responsible for designing, building, and maintaining scalable data infrastructure and pipelines that enable data-driven decision making across our organization and client projects. This role offers the opportunity to work with cutting-edge technologies and contribute to innovative data solutions for global clients.

What you do

Technical Skills
- Minimum 3+ years of experience in data engineering or a related field.
- Strong programming skills in Python and/or Scala/Java.
- Experience with SQL and database technologies (PostgreSQL, MySQL, MongoDB).
- Hands-on experience with data processing frameworks: Apache Spark and the Hadoop ecosystem, Apache Kafka for streaming data, and Apache Airflow or similar workflow orchestration tools.
- Knowledge of data warehouse concepts and technologies.
- Experience with containerization (Docker, Kubernetes).
- Understanding of data modeling principles and best practices.

Cloud & Platform Experience
- Experience with at least one major cloud platform (AWS, Azure, or GCP).
- Familiarity with cloud-native data services: data lakes, data warehouses, and analytics services; serverless computing and event-driven architectures; identity and access management for data systems.
- Knowledge of Infrastructure as Code (Terraform, CloudFormation, ARM templates).

Data & Analytics
- Understanding of data governance and security principles.
- Experience with data quality frameworks and monitoring.
- Knowledge of dimensional modeling and data warehouse design.
- Familiarity with business intelligence and analytics tools.
- Understanding of data privacy regulations (GDPR, CCPA).

Preferred Qualifications

Advanced Technical Skills
- Experience with modern data stack tools (dbt, Fivetran, Snowflake, Databricks).
- Knowledge of machine learning pipelines and MLOps practices.
- Experience with event-driven architectures and microservices.
- Familiarity with data mesh and data fabric concepts.
- Experience with graph databases (Neo4j, Amazon Neptune).

Industry Experience
- Experience in a digital agency or consulting environment.
- Background in financial services, e-commerce, retail, or customer experience platforms.
- Knowledge of marketing technology and customer data platforms.
- Experience with real-time analytics and personalization systems.

Soft Skills
- Strong problem-solving and analytical thinking abilities.
- Excellent communication skills for client-facing interactions.
- Ability to work independently and manage multiple projects.
- Adaptability to a rapidly changing technology landscape.
- Experience mentoring junior team members.

What we ask

Data Infrastructure & Architecture
- Design and implement robust, scalable data architectures and pipelines.
- Build and maintain ETL/ELT processes for batch and real-time data processing.
- Develop data models and schemas optimized for analytics and reporting.
- Ensure data quality, consistency, and reliability across all data systems.

Platform-Agnostic Development
- Work with multiple cloud platforms (AWS, Azure, GCP) based on client requirements.
- Implement data solutions using various technologies and frameworks.
- Adapt quickly to new tools and platforms as project needs evolve.
- Maintain expertise across different cloud ecosystems and services.

Data Pipeline Development
- Create automated data ingestion pipelines from various sources (APIs, databases, files, streaming).
- Implement data transformation logic using modern data processing frameworks.
- Build monitoring and alerting systems for data pipeline health.
- Optimize pipeline performance and cost-efficiency.

Collaboration & Integration
- Work closely with data scientists, analysts, and business stakeholders.
- Collaborate with DevOps teams to implement CI/CD for data pipelines.
- Partner with client teams to understand data requirements and deliver solutions.
- Participate in architecture reviews and technical decision-making.

What we offer
You'll join an international network of data professionals within our organisation. We support continuous development through our dedicated Academy. If you're looking to push the boundaries of innovation and creativity in a culture that values freedom and responsibility, we encourage you to apply. At Valtech, we're here to engineer experiences that work and reach every single person. To do this, we are proactive about creating workplaces that work for every person at Valtech. Our goal is to create an equitable workplace which gives people from all backgrounds the support they need to thrive, grow and meet their goals (whatever they may be). You can find out more about what we're doing to create a Valtech for everyone here. Please do not worry if you do not meet all of the criteria or if you have some gaps in your CV. We'd love to hear from you and see if you're our next member of the Valtech team!
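Several of the ingestion responsibilities above center on Apache Kafka. As a purely hypothetical illustration (the broker address, topic name, and payloads are invented, not taken from the posting), a minimal Scala producer using the plain kafka-clients API might look like this:

```scala
import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerConfig, ProducerRecord}
import org.apache.kafka.common.serialization.StringSerializer

object EventProducer {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092")
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, classOf[StringSerializer].getName)
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, classOf[StringSerializer].getName)
    // acks=all waits for the full in-sync replica set, trading latency for durability.
    props.put(ProducerConfig.ACKS_CONFIG, "all")

    val producer = new KafkaProducer[String, String](props)
    try {
      // Send a handful of JSON events keyed by a made-up user id.
      (1 to 5).foreach { i =>
        val record = new ProducerRecord[String, String](
          "user-events", s"user-$i", s"""{"user_id":"user-$i","action":"click"}""")
        producer.send(record)
      }
      producer.flush()
    } finally {
      producer.close()
    }
  }
}
```

A matching consumer, or a Spark or Flink streaming job, would then read the same topic downstream of this producer.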

Posted 3 weeks ago

Apply

3.0 - 6.0 years

5 - 8 Lacs

Bengaluru

Work from Office

Tasks

Experience
- Experience in building and managing data pipelines.
- Experience with development and operations of data pipelines in the cloud (preferably Azure).
- Experience with distributed data/computing tools: MapReduce, Hadoop, Hive, Spark.
- Deep expertise in architecting data pipelines in the cloud using cloud-native technologies.
- Good experience in both ETL and ELT ingestion patterns.
- Hands-on experience working on large volumes of data (petabyte scale) with distributed compute frameworks.
- Good understanding of container platforms: Kubernetes and Docker.
- Excellent knowledge of and experience with object-oriented programming.
- Familiarity developing with RESTful API interfaces.
- Experience with serialization formats such as JSON and YAML.
- Proficient in relational database design and development.
- Good knowledge of data warehousing concepts.
- Working experience with Agile Scrum methodology.

Technical Skills
- Strong skills in distributed cloud data analytics platforms like Databricks, HDInsight, EMR clusters, etc.
- Strong programming skills in Python/Java/R/Scala, etc.
- Experience with stream-processing systems: Kafka, Apache Storm, Spark Streaming, Apache Flink, etc.
- Hands-on working knowledge of cloud data lake stores like Azure Data Lake Storage.
- Data pipeline orchestration with Azure Data Factory or Amazon Data Pipeline.
- Good knowledge of file formats like ORC, Parquet, Delta, Avro, etc.
- Good experience using SQL and NoSQL databases like MySQL, Elasticsearch, MongoDB, PostgreSQL, and Cassandra running huge volumes of data.
- Strong experience in networking and security measures.
- Proficiency with CI/CD automation, specifically with DevOps build and release pipelines.
- Proficiency with Git, including branching/merging strategies, pull requests, and basic command-line functions.
- Good data modelling skills.

Job Responsibilities
- Cloud analytics, storage, security, resiliency, and governance.
- Building and maintaining the data architecture for data engineering and data science projects.
- Extract, transform, and load data from source systems to a data lake or data warehouse, leveraging a combination of various IaaS or SaaS components.
- Perform compute on huge volumes of data using open-source projects like Databricks/Spark or Hadoop.
- Define table schemas and quickly adapt with the pipeline.
- Work with high-volume unstructured and streaming datasets.
- Manage NoSQL databases on cloud (AWS, Azure, etc.).
- Architect solutions to migrate projects from on-premises to cloud.
- Research, investigate, and implement newer technologies to continually evolve security capabilities.
- Identify valuable data sources and automate collection processes.
- Implement adequate networking and security measures for the data pipeline.
- Implement monitoring solutions for the data pipeline.
- Support the design of, and implement, data engineering solutions.
- Maintain excellent documentation for understanding and accessing data storage.
- Work independently as well as in teams to deliver transformative solutions to clients.
- Be proactive and constantly pay attention to the scalability, performance, and availability of our systems.
- Establish the privacy/security hierarchy and regulate access.
- Collaborate with engineering and product development teams.
- Bring a systematic problem-solving approach with strong communication skills and a sense of ownership and drive.

Qualifications
- Bachelor's or Master's degree in Computer Science or relevant streams.
- Any relevant cloud data engineering certification.
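To make the batch side of this stack concrete, here is a small, hypothetical Scala sketch of a Spark job reading raw CSV from a cloud data lake path and writing Parquet partitioned by load date. The storage paths, container names, and columns are invented for the illustration, not taken from the posting:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object DailyBatchEtl {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("daily-batch-etl")
      .getOrCreate()

    // Read the raw drop zone; header and schema inference keep the sketch short,
    // whereas a production job would usually pin an explicit schema.
    val raw = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("abfss://raw@datalake.dfs.core.windows.net/sales/")

    // Basic cleansing: drop rows missing a key, normalise country codes,
    // and stamp the load date used for partitioning.
    val cleaned = raw
      .filter(col("order_id").isNotNull)
      .withColumn("country", upper(trim(col("country"))))
      .withColumn("load_date", current_date())

    // Write columnar output partitioned by load date for efficient pruning.
    cleaned.write
      .mode("overwrite")
      .partitionBy("load_date")
      .parquet("abfss://curated@datalake.dfs.core.windows.net/sales/")

    spark.stop()
  }
}
```

The same shape of job applies to ORC, Delta, or Avro outputs; only the writer format and its dependencies change.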

Posted 3 weeks ago

Apply

8.0 years

0 Lacs

Noida, Uttar Pradesh, India

Remote

Location: Noida, Uttar Pradesh, India (Hybrid/Remote options available)
Employment Type: Full-time

About Fusionpact Technologies
At Fusionpact Technologies, we are at the forefront of leveraging a fusion of cutting-edge technologies to create impactful solutions that drive significant business value for our clients globally. Established in 2022, we specialize in Cloud Services, Artificial Intelligence, Software Development, ERP Solutions, and IT Consulting. Our passion lies in pushing the boundaries of what's possible with technologies like AI/ML, Blockchain, Reactive Architecture, and Cloud-Native solutions. We're a dynamic, agile, and innovation-driven company committed to delivering high-quality, scalable, and secure software that truly makes a difference. With a proven track record across 175+ projects, including innovative products like ForestTwin™ for carbon tech and the ISO Platform for compliance, we are dedicated to transforming businesses and making a brighter world.

The Opportunity
We're looking for a highly skilled and experienced Tech Lead to join our dynamic engineering team. In this pivotal role, you'll be instrumental in shaping our technical vision, driving the development of next-generation reactive and microservices-based applications, and fostering a culture of technical excellence within our agile development environment. You'll be a key player in designing and implementing robust, scalable, and resilient systems. Your expertise in architectural principles will be crucial in guiding the team and ensuring the successful deployment of high-quality software. We're seeking a leader who can not only leverage strong fundamental knowledge but also expertly integrate and utilize AI tools to deliver superior software solutions in a fast-paced, agile manner. If you thrive on technical challenges, enjoy mentoring, and are excited about the impact of AI on software development, Fusionpact Technologies is the place for you.

Responsibilities

Technical Leadership & Architecture
- Lead the design, development, and deployment of complex reactive and microservices-based applications, ensuring adherence to Fusionpact's best practices, architectural principles, and quality standards.
- Define and enforce coding standards, design patterns, and architectural guidelines across development teams to ensure consistency and maintainability.
- Conduct rigorous technical reviews and provide constructive feedback to ensure high-quality code, scalable solutions, and optimal performance.
- Mentor, coach, and guide development teams on advanced architectural concepts, reactive programming paradigms (e.g., Akka), and microservices best practices.

Agile Development & AI Integration
- Drive agile development practices within your scrum team, working closely with the Scrum Master, DevOps, QA, backend, and frontend engineers to ensure efficient workflows and timely delivery.
- Champion the adoption and effective utilization of cutting-edge AI tools (e.g., Cursor AI, GitHub Copilot, or similar generative AI solutions) to enhance code quality, accelerate development cycles, and improve overall team efficiency.
- Proactively identify opportunities to leverage AI for tasks such as intelligent code generation, automated refactoring, advanced bug detection, and smart automated testing frameworks.
- Ensure the seamless and effective integration of AI-powered workflows into the existing development pipeline, continuously optimizing the software delivery lifecycle.

Project Management & Quality Assurance
- Effectively manage and contribute to multiple projects simultaneously, consistently delivering superior quality output in line with project timelines and client expectations.
- Take ownership of the technical success of projects, from initial conception and architectural design to successful deployment and ongoing maintenance.
- Collaborate with product owners and stakeholders to translate complex business requirements into clear, actionable technical specifications.
- Ensure the delivery of highly performant, secure, maintainable, and resilient software solutions that meet Fusionpact's high standards.

Team Collaboration & Mentorship
- Foster a collaborative, innovative, and inclusive team environment, encouraging knowledge sharing, continuous learning, and cross-functional synergy.
- Provide dedicated technical guidance, coaching, and mentorship to junior and mid-level engineers, helping them grow their skills and careers.
- Champion a culture of continuous learning, staying abreast of emerging technologies, industry trends, and innovative software development methodologies, and bringing these insights back to the team.

Required Skills & Experience
- 8+ years of progressive experience in software development, with at least 3+ years in a Tech Lead or similar leadership role focused on complex distributed systems.
- Proven hands-on experience designing, building, and deploying highly available, scalable, and resilient reactive and microservices-based applications.
- Deep understanding of modern architecture principles, design patterns (e.g., Domain-Driven Design, Event Sourcing, CQRS), and software development best practices.
- Strong hands-on experience with at least one major programming language extensively used in reactive/microservices development (e.g., Java, Kotlin, Go, or Scala).
- Strong fundamental knowledge and practical experience leveraging AI tools (e.g., Cursor AI, GitHub Copilot, Tabnine, or similar) to enhance development workflows, improve code quality, and accelerate delivery.
- Demonstrated ability to effectively manage and contribute to multiple projects simultaneously while maintaining superior quality output.
- Extensive experience working in a fast-paced, agile (Scrum, Kanban) environment and guiding cross-functional scrum teams (Scrum Master, DevOps, QA, backend, frontend).
- Solid understanding of DevOps principles, CI/CD pipelines, and automated deployment strategies.
- Excellent communication, interpersonal, and leadership skills, with the ability to articulate complex technical concepts to diverse audiences.
- Strong ethics and integrity, with a proven ability to thrive and lead effectively in a remote or hybrid work environment.

Preferred Qualifications
- Hands-on experience with Scala and Akka for building reactive systems.
- Proficiency with cloud platforms such as AWS, Azure, or GCP, including experience with their relevant services for microservices deployment and management.
- In-depth experience with containerization technologies (Docker, Kubernetes) and orchestration.
- Familiarity with various data storage technologies (relational databases, NoSQL databases like Cassandra, MongoDB, Redis) and message queues (Kafka, RabbitMQ).
- Experience with performance tuning, monitoring, and troubleshooting distributed systems.
- Certifications in relevant cloud platforms or agile methodologies.
If you are a passionate and experienced Tech Lead with a strong background in reactive and microservices architectures, a knack for leveraging AI to deliver exceptional software, and a commitment to fostering a high-performing team, we encourage you to apply and become a part of Fusionpact Technologies' innovative journey! Apply Now!
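Since the posting calls out Scala with Akka for reactive systems, here is a deliberately tiny, hypothetical sketch of an Akka Typed actor (the names and messages are invented, not taken from the posting) showing the message-driven style the role refers to:

```scala
import akka.actor.typed.{ActorSystem, Behavior}
import akka.actor.typed.scaladsl.Behaviors

object Greeter {
  // The message protocol this actor understands.
  final case class Greet(name: String)

  // A stateless behaviour: log each greeting and stay ready for the next message.
  def apply(): Behavior[Greet] =
    Behaviors.receive { (context, message) =>
      context.log.info("Hello, {}!", message.name)
      Behaviors.same
    }
}

object GreeterApp {
  def main(args: Array[String]): Unit = {
    // The ActorSystem hosts the actor hierarchy, with Greeter as the guardian behaviour.
    val system: ActorSystem[Greeter.Greet] = ActorSystem(Greeter(), "greeter-system")
    system ! Greeter.Greet("reader")
    system.terminate()
  }
}
```

In a real microservice an actor like this would typically sit behind an HTTP or gRPC endpoint, own one slice of state, and rely on supervision for failure handling.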

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Description
Data is the new oil! Are you passionate about data? Does the prospect of dealing with massive volumes of data excite you? Do you want to build big-data solutions that process billions of records a day in a scalable fashion using AWS technologies? Do you want to create the next-generation tools for intuitive data access? If so, Amazon Finance Technology (FinTech) is for you!

Amazon's Financial Technology team is looking for passionate, results-oriented, inventive Data Engineers who can work on massively scalable and distributed systems. The candidate thrives in a fast-paced environment, understands how to deal with large sets of data and transactions, and will help us deliver on a new generation of software, leveraging Amazon Web Services. The candidate is passionate about technology and wants to be involved with real business problems. Our platform serves Amazon's finance, tax, and accounting functions across the globe. We are looking for an experienced Data Engineer to join the FinTech Tax teams that build and operate technology supporting Amazon's tax compliance and audit needs worldwide. Our teams are responsible for building a big-data platform to stream billions of transactions a day and to process and organize them into the output required for tax compliance globally. With Amazon's data-driven culture, our platform will provide accurate, timely, and actionable data for our customers.

As a member of this team, your mission will be to design, develop, document, and support massively scalable, distributed, real-time systems. Using Python, Java, object-oriented design patterns, distributed databases, and other innovative storage techniques, you will build and deliver software systems that support complex and rapidly evolving business requirements. You will communicate your ideas effectively to achieve the right outcome for your team and customer. Your code, design, and implementation decisions will set a great example to other engineers. As a senior engineer, you will provide guidance and support for other engineers with industry best practices and direction. You will also have the opportunity to impact technical decisions in the broader organisation as well as mentor other engineers in the team.

Key job responsibilities
This is an exciting opportunity for a seasoned Data Engineer to take on a pivotal role in the architecture, design, implementation, and deployment of large-scale, critical, and complex financial applications. You will push your design and architecture skills to the limit by owning all aspects of end-to-end solutions. Leveraging agile methodologies, you will iteratively build and deliver high-quality results in a fast-paced environment. With strong verbal and written communication abilities, self-motivation, and a collaborative mindset, you will work across Amazon engineering teams and business teams globally to plan, design, execute, and implement this new platform across multiple geographies. Throughout the project lifecycle, you will review requirements, design services that lay the foundation for the new technology platform, integrate with existing architectures, develop and test code (Python, Scala, Java), and deliver seamless implementations for Global Tax customers. In a hands-on role, you will manage day-to-day activities and participate in designs, design reviews, and code reviews with the engineering team. Utilizing AWS technologies such as EC2, RDS/DynamoDB/Redshift, S3, EMR, Glue, and QuickSight, you will build solutions. You will design and code technical solutions to deliver value to tax customers. Additionally, you will contribute to a suite of tools hosted on the AWS infrastructure, working with a variety of tools across the spectrum of the software development lifecycle.

About The Team
The FinTech International Tax Compliance (FIT Compliance) team oversees the Tax Data Warehouse platform, a large-scale data platform and reporting solution designed for indirect tax compliance across Amazon and other organizations. This platform enables businesses to adhere to mandatory tax regulations, drive data accuracy, and ensure audit readiness, providing consistent and reliable data to tax teams in the EMEA regions. As Amazon expands its operations in EMEA, the FIT Compliance team plays a crucial role in delivering mandatory tax reporting obligations, facilitating these launches. Furthermore, their charter encompasses building solutions to meet evolving tax legislation changes, audit requests, and technology requirements globally, such as International Recon and Digital Service Tax (DST). The team is also investing in building the next generation of strategic platforms such as the Unified Tax Ledger (UTL) and Golden Data Set (GDS).

Basic Qualifications
- 3+ years of data engineering experience.
- Experience with data modeling, warehousing, and building ETL pipelines.
- Experience with SQL.

Preferred Qualifications
- Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions.
- Experience with non-relational databases / data stores (object storage, document or key-value stores, graph databases, column-family databases).

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Company - ADCI HYD 13 SEZ
Job ID: A2883904

Posted 3 weeks ago

Apply

3.0 - 5.0 years

5 - 7 Lacs

Gurugram

Work from Office

Key Responsibilities:
- Build and drive the vision for the data platform and prepare a roadmap across Freecharge businesses.
- Work with business and product teams to understand and define the scope for data platform requirements, owning the requirements from the data engineering product manager.
- Help architect and build real-time and batch processing infrastructure for customer segmentation, hyper-personalization, AI/ML use cases, etc.
- Conceptualize scalable and efficient solutions for business and technical problems.
- Demonstrate an advanced understanding of the technology stack, the relevant product metrics, and how your products interact with various businesses.
- Proactively assess customer sentiment and take appropriate measures using analytics and big data.
- Work closely with a highly technical engineering team.
- Define and monitor the KPIs for the data platform used across the organization, for better customer experience and merchant sentiment.

Preferred Qualifications:
- 3-5 years as a data platform product manager, with experience in building data platforms or leading data architecture revamp programs.
- Experience working with cross-functional teams spanning product, growth, marketing, data analytics, and data engineering.
- Ability to build consensus among stakeholders, including the leadership team.
- Ability to grasp technical concepts quickly and communicate them effectively to a non-technical audience.
- Strong execution capability, paired with the ability to envision and articulate a clear path to get there.
- B Tech/M Tech in Computer Science/IT, with experience in Spark, Hadoop, Scala, Java, segmentation, linear algebra, etc.

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Description
Are you interested in innovating to deliver a world-class level of service to Amazon's Selling Partners? At Amazon International Seller Services, our mission is to make Sellers successful on Amazon. We are the team in Amazon which has a charter to work with sellers from every country. This provides a great opportunity to get diverse experience and a worldwide view. In this role you will be part of our mission to improve the Customer and Selling Partner experience across EU expansion marketplaces. You will partner with Business leaders, Product managers, and the BI manager to help drive specific business goals and be the analytical engine in specific areas. You will help identify, track, and manage Key Performance Indicators (KPIs), and partner with internal and external teams to identify root causes and automate iterative workflows using Amazon's tools. The ideal candidate should have a proven ability to independently deliver complex analytics projects. This role has high leadership visibility and requires efficient communication with tech and non-tech stakeholders. To be successful in this role, you should be comfortable dealing with large and complex data sets, have expertise in SQL querying and Excel, and have experience building self-service dashboards and using visualization tools, while always applying analytical rigor to solve business problems. You should have excellent judgment, be passionate about high standards (never satisfied with the status quo), and deliver innovative solutions.

Key job responsibilities
- Collaborate with leaders, multiple account managers, team managers, etc. to understand business requirements and to prioritize and deliver data and reporting independently.
- Design, develop, and maintain scalable, automated, user-friendly systems, reports, dashboards, etc. that will support our analytical and business needs.
- Analyze key metrics to uncover trends and root causes of issues, and build simplified solutions for account managers/product managers to consume.
- Apply statistical and machine learning methods to specific business problems and data.
- Utilize code (Python, R, Scala, etc.) for analyzing data and building statistical models.
- Lead deep dives working backwards from business hypotheses and anecdotes, and build visualizations and automation to reduce iterative manual efforts.
- Automate workflows using Amazon tools to improve sales teams' productivity.
- Collaborate with other analysts to adopt best practices.
- Continually upskill in new technologies and adopt them in day-to-day work.

Basic Qualifications
- Bachelor's degree in a quantitative field (e.g., Computer Science, Mathematics, Statistics, Finance).
- 5+ years of experience with data querying languages (e.g., SQL), scripting languages (e.g., Python), or statistical/mathematical software (e.g., R, SAS, Matlab, etc.).
- Experience working with SQL and at least one data visualization tool (e.g., Power BI, Tableau, Amazon QuickSight) in a business environment.

Preferred Qualifications
- Proficiency in Python.
- Knowledge of Java, JSON, and Amazon technologies: AWS S3, RS, Lambda.
- Experience with machine learning/statistical modeling, data analysis tools and techniques, and the parameters that affect their performance.
- Experience as a business analyst, data analyst, or similar role.

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Company - ADCI HYD 13 SEZ
Job ID: A2983323

Posted 3 weeks ago

Apply

2.0 - 5.0 years

5 - 8 Lacs

Gurugram

Work from Office

Programming Languages: Python, Scala
Machine Learning frameworks: Scikit-learn, XGBoost, TensorFlow, Keras, PyTorch, spaCy, Gensim, Stanford NLP, NLTK, OpenCV, Spark MLlib
Machine Learning Algorithms: experience good to have
Scheduling: Airflow
Big Data / Streaming / Queues: Apache Spark, Apache NiFi, Apache Kafka, RabbitMQ (any one of them)
Databases: MySQL, Mongo/Redis/DynamoDB, Hive
Source Control: Git
Cloud: AWS
Build and Deployment: Jenkins, Docker, Docker Swarm, Kubernetes
BI tool: QuickSight (preferred), else any BI tool (must have)
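The stack above pairs Scala with Spark MLlib. As a purely hypothetical illustration of that combination (the column names and toy data are invented for the sketch), a compact training pipeline might look like this:

```scala
import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.feature.VectorAssembler
import org.apache.spark.sql.SparkSession

object ChurnModel {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("churn-model").master("local[*]").getOrCreate()
    import spark.implicits._

    // A toy training set: (sessions, spend, label).
    val training = Seq(
      (12.0, 250.0, 1.0),
      (1.0, 10.0, 0.0),
      (8.0, 180.0, 1.0),
      (2.0, 15.0, 0.0)
    ).toDF("sessions", "spend", "label")

    // Assemble raw columns into the single vector column MLlib estimators expect.
    val assembler = new VectorAssembler()
      .setInputCols(Array("sessions", "spend"))
      .setOutputCol("features")

    val lr = new LogisticRegression().setMaxIter(20)

    // Fit the two-stage pipeline and score the training data.
    val model = new Pipeline().setStages(Array(assembler, lr)).fit(training)
    model.transform(training).select("sessions", "spend", "probability", "prediction").show()

    spark.stop()
  }
}
```

In practice the toy DataFrame would be replaced by features read from Hive or a lake table, and the fitted pipeline would be persisted and scheduled (e.g., via Airflow).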

Posted 3 weeks ago

Apply

5.0 - 10.0 years

10 - 16 Lacs

Hyderabad

Remote

Job description
As an ETL Developer for the Data and Analytics team at Guidewire, you will participate and collaborate with our customers and SI Partners who are adopting our Guidewire Data Platform as the centerpiece of their data foundation. You will facilitate, and be an active developer when necessary, to operationalize the realization of the agreed-upon ETL architecture goals of our customers, adhering to Guidewire best practices and standards. You will work with our customers, partners, and other Guidewire team members to deliver successful data transformation initiatives. You will utilize best practices for design, development, and delivery of customer projects. You will share knowledge with the wider Guidewire Data and Analytics team to enable predictable project outcomes and emerge as a leader in our thriving data practice. One of our principles is to have fun while we deliver, so this role will need to keep the delivery process fun and engaging for the team in collaboration with the broader organization. Given the dynamic nature of the work in the Data and Analytics team, we are looking for decisive, highly skilled technical problem solvers who are self-motivated and take proactive actions for the benefit of our customers and ensure that they succeed in their journey to the Guidewire Cloud Platform. You will collaborate closely with teams located around the world and adhere to our core values: Integrity, Collegiality, and Rationality.

Key Responsibilities:
- Build out technical processes from specifications provided in High Level Design and data specification documents.
- Integrate test and validation processes and methods into every step of the development process.
- Work with Lead Architects and provide inputs into defining user stories, scope, acceptance criteria, and estimates.
- Bring a systematic problem-solving approach, coupled with a sense of ownership and drive.
- Work independently in a fast-paced Agile environment.
- Actively contribute to the knowledge base from every project you are assigned to.

Qualifications:
- Bachelor's or Master's degree in Computer Science, or an equivalent level of demonstrable professional competency, and 3-5+ years in a technical capacity building out complex ETL data integration frameworks.
- 3+ years of experience with data processing and ETL (Extract, Transform, Load) and ELT (Extract, Load, Transform) concepts.
- Experience with ADF or AWS Glue, Spark/Scala, GDP, CDC, and ETL data integration.
- Experience working with relational and/or NoSQL databases.
- Experience working with different cloud platforms (such as AWS, Azure, Snowflake, Google Cloud, etc.).
- Ability to work independently and within a team.

Nice to have:
- Insurance industry experience.
- Experience with ADF or AWS Glue.
- Experience with Azure Data Factory and Spark/Scala.
- Experience with the Guidewire Data Platform.
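CDC-style ETL comes up repeatedly in this posting. As a rough, hypothetical sketch of one common pattern (the paths, the policy_id key, and the column names are invented here, and this is not Guidewire's own tooling), a Spark job in Scala that compacts a change feed down to the latest record per key could look like this:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._

object CompactChangeFeed {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("compact-change-feed").getOrCreate()

    // Each change record carries a primary key, a change timestamp, and an operation flag.
    val changes = spark.read.parquet("s3://landing/policy_changes/")

    // Keep only the most recent change per policy_id, then drop deletes.
    val latestPerKey = Window.partitionBy("policy_id").orderBy(col("change_ts").desc)
    val current = changes
      .withColumn("rn", row_number().over(latestPerKey))
      .filter(col("rn") === 1 && col("op") =!= "DELETE")
      .drop("rn", "op")

    // The compacted snapshot becomes the curated, query-ready table.
    current.write.mode("overwrite").parquet("s3://curated/policies_current/")

    spark.stop()
  }
}
```

The same compaction logic can be expressed as a merge/upsert when the target supports it (for example in a Delta or warehouse table) rather than as a full overwrite.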

Posted 3 weeks ago

Apply

5.0 years

35 - 45 Lacs

Bengaluru, Karnataka, India

On-site

This role is for one of Weekday's clients.
Salary range: Rs 3,500,000 - Rs 4,500,000 (i.e., INR 35-45 LPA)
Min Experience: 5 years
Location: Bangalore
Job Type: full-time

Requirements / Roles & Responsibilities
Develop and extend our backend platform, processing terabytes of data to deliver unique, personalized financial experiences.
Collaborate directly with tech-focused founding team members and IIT graduates with expertise in designing scalable and robust system architectures.
Design systems from scratch with scalability and security front of mind.
Demonstrate deep knowledge of design patterns in Java, DS, and algorithms.
Monitor and optimize MySQL database queries for peak performance.
Experience with tools like Scala, Kafka, Bigtable, and BigQuery is beneficial but not mandatory.
Mentor junior team members by providing regular feedback and conducting code reviews.

Posted 3 weeks ago

Apply

8.0 years

30 - 38 Lacs

Gurgaon

Remote

Role: AWS Data Engineer
Location: Gurugram
Mode: Hybrid
Type: Permanent

Job Description:
We are seeking a talented and motivated Data Engineer with the requisite years of hands-on experience to join our growing data team. The ideal candidate will have experience working with large datasets, building data pipelines, and utilizing AWS public cloud services to support the design, development, and maintenance of scalable data architectures. This is an excellent opportunity for individuals who are passionate about data engineering and cloud technologies and want to make an impact in a dynamic and innovative environment.

Key Responsibilities:
Data Pipeline Development: Design, develop, and optimize end-to-end data pipelines for extracting, transforming, and loading (ETL) large volumes of data from diverse sources into data warehouses or lakes.
Cloud Infrastructure Management: Implement and manage data processing and storage solutions in AWS (Amazon Web Services) using services like S3, Redshift, Lambda, Glue, Kinesis, and others.
Data Modeling: Collaborate with data scientists, analysts, and business stakeholders to define data requirements and design optimal data models for reporting and analysis.
Performance Tuning & Optimization: Identify bottlenecks and optimize query performance, pipeline processes, and cloud resources to ensure cost-effective and scalable data workflows.
Automation & Scripting: Develop automated data workflows and scripts to improve operational efficiency using Python, SQL, or other scripting languages.
Collaboration & Documentation: Work closely with data analysts, data scientists, and other engineering teams to ensure data availability, integrity, and quality. Document processes, architectures, and solutions clearly.
Data Quality & Governance: Ensure the accuracy, consistency, and completeness of data. Implement and maintain data governance policies to ensure compliance and security standards are met.
Troubleshooting & Support: Provide ongoing support for data pipelines and troubleshoot issues related to data integration, performance, and system reliability.

Qualifications:

Essential Skills:
Experience: 8+ years of professional experience as a Data Engineer, with a strong background in building and optimizing data pipelines and working with large-scale datasets.
AWS Experience: Hands-on experience with AWS cloud services, particularly S3, Lambda, Glue, Redshift, RDS, and EC2.
ETL Processes: Strong understanding of ETL concepts, tools, and frameworks. Experience with data integration, cleansing, and transformation.
Programming Languages: Proficiency in Python, SQL, and other scripting languages (e.g., Bash, Scala, Java).
Data Warehousing: Experience with relational and non-relational databases, including data warehousing solutions like AWS Redshift, Snowflake, or similar platforms.
Data Modeling: Experience in designing data models, schema design, and data architecture for analytical systems.
Version Control & CI/CD: Familiarity with version control tools (e.g., Git) and CI/CD pipelines.
Problem-Solving: Strong troubleshooting skills, with an ability to optimize performance and resolve technical issues across the data pipeline.

Desirable Skills:
Big Data Technologies: Experience with Hadoop, Spark, or other big data technologies.
Containerization & Orchestration: Knowledge of Docker, Kubernetes, or similar containerization/orchestration technologies.
Data Security: Experience implementing security best practices in the cloud and managing data privacy requirements.
Data Streaming: Familiarity with data streaming technologies such as AWS Kinesis or Apache Kafka.
Business Intelligence Tools: Experience with BI tools (Tableau, QuickSight) for visualization and reporting.
Agile Methodology: Familiarity with Agile development practices and tools (Jira, Trello, etc.).

Job Type: Permanent
Pay: ₹3,000,000.00 - ₹3,800,000.00 per year
Benefits: Work from home
Schedule: Day shift, Monday to Friday
Experience:
AWS Glue Catalog: 3 years (Required)
Data Engineering: 6 years (Required)
AWS CDK, CloudFormation, Lambda, Step Functions: 3 years (Required)
AWS Elastic MapReduce (EMR): 3 years (Required)
Work Location: In person
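As a loose illustration of the S3-based pipeline work described above (not this employer's actual stack; the bucket names and columns are invented), the sketch below reads raw CSV from a hypothetical S3 location and writes date-partitioned Parquet back out:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object S3PipelineSketch {
  def main(args: Array[String]): Unit = {
    // On EMR or Glue the S3 connector and credentials are typically preconfigured.
    val spark = SparkSession.builder.appName("s3-pipeline-sketch").getOrCreate()

    // Hypothetical raw events landed by an upstream process.
    val events = spark.read
      .option("header", "true")
      .csv("s3://example-raw-bucket/events/")

    // Derive a partition column and keep only well-formed rows.
    val curated = events
      .withColumn("event_ts", to_timestamp(col("event_ts")))
      .withColumn("event_date", to_date(col("event_ts")))
      .filter(col("event_id").isNotNull)

    // Write columnar, date-partitioned output for downstream Redshift/Athena queries.
    curated.write.mode("append")
      .partitionBy("event_date")
      .parquet("s3://example-curated-bucket/events/")

    spark.stop()
  }
}
```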

Posted 3 weeks ago

Apply

3.0 years

6 - 10 Lacs

Gurgaon

On-site

Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data, and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits, and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities:
Analyze business requirements and functional specifications.
Determine the impact of changes on the current functionality of the system.
Interact with diverse business partners and technical workgroups.
Be flexible to collaborate with onshore business during US business hours.
Be flexible to support project releases during US business hours.
Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary, or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications:
Undergraduate degree or equivalent experience.
3+ years of working experience in Python, PySpark, and Scala.
3+ years of experience working on MS SQL Server and NoSQL DBs like Cassandra.
Hands-on working experience in Azure Databricks.
Solid healthcare domain knowledge.
Exposure to DevOps methodology and creating CI/CD deployment pipelines.
Exposure to Agile methodology, specifically using tools like Rally.
Ability to understand the existing application codebase, perform impact analysis, and update the code when required based on the business logic or for optimization.
Proven excellent analytical and communication skills (both verbal and written).

Preferred Qualification:
Experience with streaming applications (Kafka, Spark Streaming, etc.).

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location, and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups, and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.

#Gen #NJP
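The preferred qualification above mentions streaming with Kafka and Spark. As a generic, hedged sketch of that pattern (not Optum's actual pipeline; the broker address and topic are invented), here is a minimal Spark Structured Streaming job in Scala that reads from a Kafka topic and writes windowed counts to the console:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object KafkaStreamSketch {
  def main(args: Array[String]): Unit = {
    // Requires the spark-sql-kafka-0-10 package on the classpath.
    val spark = SparkSession.builder.appName("kafka-stream-sketch").getOrCreate()

    // Subscribe to a hypothetical topic of claim events.
    val raw = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker1:9092")
      .option("subscribe", "claims-events")
      .load()

    // Kafka delivers key/value as binary; cast the value to a string for downstream parsing.
    val messages = raw.selectExpr("CAST(value AS STRING) AS json", "timestamp")

    // Simple per-minute message counts as a stand-in for real business logic.
    val counts = messages
      .groupBy(window(col("timestamp"), "1 minute"))
      .count()

    val query = counts.writeStream
      .outputMode("complete")
      .format("console")
      .start()

    query.awaitTermination()
  }
}
```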

Posted 3 weeks ago

Apply

0 years

0 Lacs

Andhra Pradesh, India

On-site

We are seeking a highly skilled and motivated Big Data Engineer to join our data engineering team. The ideal candidate will have hands-on experience with the Hadoop ecosystem and Apache Spark, and programming expertise in Python (PySpark), Scala, and Java. You will be responsible for designing, developing, and optimizing scalable data pipelines and big data solutions to support analytics and business intelligence initiatives.

Posted 3 weeks ago

Apply

0 years

0 Lacs

Andhra Pradesh, India

On-site

Collaborate with Product Owners and stakeholders to understand the business requirements.
Good experience in Apache Kafka, Python, Tableau, and MSBI (SSIS, SSRS).
Kafka integration with Python and the data loading process.
Analyse data from key source systems and design suitable solutions that transform the data from source to target.
Provide support to the stakeholders and scrum team throughout the development lifecycle and respond to any design queries.
Support testing and implementation, and review solutions to ensure functional and data assurance requirements are met.
Passionate about data and delivering high-quality, data-led solutions.
Able to influence stakeholders, build strong business relationships, and communicate in a clear, concise manner.
Experience working with SQL or any big data technologies is a plus (Hadoop, Hive, HBase, Scala, Spark, etc.).
Good with Control-M, Git, and CI/CD pipelines.
Good team player with a strong team ethos.

Skills Required: MSBI SSIS, MS SQL Server, Kafka, Airflow, ANSI SQL, Shell Script, Python, Scala, HDFS.
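The listing above asks for Kafka integration and data loading (with Python in the posting itself). Purely as an illustration of the basic produce side of that pattern, and keeping with this page's Scala focus, here is a minimal producer sketch using the standard Kafka client; the broker address and topic name are made up:

```scala
import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}
import org.apache.kafka.common.serialization.StringSerializer

object KafkaProducerSketch {
  def main(args: Array[String]): Unit = {
    // Hypothetical broker address; serializers turn keys and values into bytes.
    val props = new Properties()
    props.put("bootstrap.servers", "localhost:9092")
    props.put("key.serializer", classOf[StringSerializer].getName)
    props.put("value.serializer", classOf[StringSerializer].getName)

    val producer = new KafkaProducer[String, String](props)
    try {
      // Send a handful of example records; send() is asynchronous, flush() forces delivery.
      (1 to 5).foreach { i =>
        producer.send(new ProducerRecord[String, String]("orders", s"order-$i", s"""{"id": $i}"""))
      }
      producer.flush()
    } finally {
      producer.close()
    }
  }
}
```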

Posted 3 weeks ago

Apply

4.0 - 9.0 years

6 - 11 Lacs

Chennai

Work from Office

Job Summary
Synechron is seeking an experienced Data Processing Engineer to lead the development of large-scale data processing solutions using Java, Apache Flink/Storm/Beam, and Google Cloud Platform (GCP). In this role, you will collaborate across teams to design, develop, and optimize data-intensive applications that support strategic business objectives. Your expertise will help evolve our data architecture, improve processing efficiency, and ensure the delivery of reliable, scalable solutions in an Agile environment.

Software Requirements
Required:
Java (version 8 or higher)
Apache Flink, Storm, or Beam for streaming data processing
Google Cloud Platform (GCP) services, especially BigQuery and related data tools
Experience with databases such as BigQuery, Oracle, or equivalent
Familiarity with version control tools such as Git
Preferred:
Cloud deployment experience, with GCP in particular
Additional familiarity with containerization (Docker/Kubernetes)
Knowledge of CI/CD pipelines and DevOps practices

Overall Responsibilities
Collaborate closely with cross-functional teams to understand data and system requirements, then design scalable solutions aligned with business needs.
Develop detailed technical specifications, implementation plans, and documentation for new features and enhancements.
Implement, test, and deploy data processing applications using Java and Apache Flink/Storm/Beam within GCP environments.
Conduct code reviews to ensure quality, security, and maintainability, supporting team members' growth and best practices.
Troubleshoot technical issues, resolve bottlenecks, and optimize application performance and resource utilization.
Stay current with advancements in data processing, cloud technology, and Java development to continuously improve solutions.
Support testing teams to verify data workflows and validation processes, ensuring reliability and accuracy.
Participate in Agile ceremonies, including sprint planning, stand-ups, and retrospectives, to ensure continuous delivery and process improvement.

Technical Skills (by Category)
Programming Languages:
Required: Java (8+)
Preferred: Python, Scala, or Node.js for scripting or auxiliary processing
Databases/Data Management:
Experience with BigQuery, Oracle, or similar relational data stores
Cloud Technologies:
GCP (BigQuery, Cloud Storage, Dataflow, etc.) with hands-on experience in cloud data solutions
Frameworks and Libraries:
Apache Flink, Storm, or Beam for stream processing
Java SDKs, APIs, and data integration libraries
Development Tools and Methodologies:
Git, Jenkins, JIRA, and Agile/Scrum practices
Familiarity with containerization (Docker, Kubernetes) is a plus
Security and Compliance:
Understanding of data security principles in cloud environments

Experience Requirements
4+ years of experience in software development, with a focus on data processing and Java-based backend development
Proven experience working with Apache Flink, Storm, or Beam in production environments
Strong background in managing large data workflows and pipeline optimization
Experience with GCP data services and cloud-native development
Demonstrated success in Agile projects, including collaboration with cross-functional teams
Previous leadership or mentorship experience is a plus

Day-to-Day Activities
Design, develop, and deploy scalable data processing applications in Java using Flink/Storm/Beam on GCP
Collaborate with data engineers, analysts, and architects to translate business needs into technical solutions
Conduct code reviews, optimize data pipelines, and troubleshoot system issues swiftly
Document technical specifications, data schemas, and process workflows
Participate actively in Agile ceremonies, provide updates on task progress, and suggest process improvements
Support continuous integration and deployment of data applications
Mentor junior team members, sharing best practices and technical insights

Qualifications
Bachelor's or Master's degree in Computer Science, Information Technology, or equivalent
Relevant certifications in cloud technologies or data processing (preferred)
Evidence of continuous professional development and staying current with industry trends

Professional Competencies
Strong analytical and problem-solving skills focused on data processing challenges
Leadership abilities to guide, mentor, and develop team members
Excellent communication skills for technical documentation and stakeholder engagement
Adaptability to rapidly changing technologies and project priorities
Capacity to prioritize tasks and manage time efficiently under tight deadlines
Innovative mindset to leverage new tools and techniques for performance improvements
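The role above centers on stream processing with Flink/Storm/Beam. Purely as a hedged illustration (the posting itself emphasizes Java; this uses Flink's Scala DataStream API to stay consistent with the rest of the examples on this page), here is the classic streaming word-count sketch:

```scala
import org.apache.flink.streaming.api.scala._

object FlinkWordCountSketch {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment

    // A tiny in-memory source; a real job would read from Kafka, Pub/Sub, files, etc.
    val lines = env.fromElements("flink storm beam", "flink beam", "storm")

    // Split lines into words, pair each with a count of 1, key by word, and sum the counts.
    val counts = lines
      .flatMap(_.toLowerCase.split("\\s+"))
      .map(word => (word, 1))
      .keyBy(_._1)
      .sum(1)

    counts.print()
    env.execute("word-count-sketch")
  }
}
```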

Posted 3 weeks ago

Apply

3.0 - 5.0 years

5 - 7 Lacs

Bengaluru, Karnataka

Work from Office

Data Governance
This team will be the central data governance team for Kotak Bank, managing metadata platforms, data privacy, data security, data stewardship, and the data quality platform. If you have the right data skills and are ready to build data lake solutions from scratch for high-concurrency systems involving multiple source systems, then this is the team for you. Your day-to-day role will include:
Drive business decisions with technical input and lead the team.
Design, implement, and support a data infrastructure from scratch.
Manage AWS resources, including EC2, EMR, S3, Glue, Redshift, and MWAA.
Extract, transform, and load data from various sources using SQL and AWS big data technologies.
Explore and learn the latest AWS technologies to enhance capabilities and efficiency.
Collaborate with data scientists and BI engineers to adopt best practices in reporting and analysis.
Improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers.
Build data platforms, data pipelines, or data management and governance tools.

Basic Qualifications (Data Engineer)
Bachelor's degree in Computer Science, Engineering, or a related field
3-5 years of experience in data engineering
Strong understanding of AWS technologies, including S3, Redshift, Glue, and EMR
Experience with data pipeline tools such as Airflow and Spark
Experience with data modeling and data quality best practices
Excellent problem-solving and analytical skills
Strong communication and teamwork skills
Experience in at least one modern scripting or programming language, such as Python, Java, or Scala
Strong advanced SQL skills

Preferred Qualifications
AWS cloud technologies: Redshift, S3, Glue, EMR, Kinesis, Firehose, Lambda, IAM, Airflow
Prior experience in the Indian banking segment and/or fintech is desired
Experience with non-relational databases and data stores
Building and operating highly available, distributed data processing systems for large datasets
Professional software engineering and best practices for the full software development life cycle
Designing, developing, and implementing different types of data warehousing layers
Leading the design, implementation, and successful delivery of large-scale, critical, or complex data solutions
Building scalable data infrastructure and understanding distributed systems concepts
SQL, ETL, and data modelling
Ensuring the accuracy and availability of data to customers
Proficiency in at least one scripting or programming language for handling large-volume data processing
Strong presentation and communication skills

For Managers:
Customer centricity and an obsession for the customer
Ability to manage stakeholders (product owners, business stakeholders, cross-functional teams) and to coach agile ways of working
Ability to structure and organize teams, and streamline communication
Prior work experience executing large-scale data engineering projects

Posted 3 weeks ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Experience in SQL and an understanding of ETL best practices.
Good hands-on experience in ETL/Big Data development.
Extensive hands-on experience in Scala.
Experience with Spark/YARN, troubleshooting Spark, Linux, and Python.
Setting up a Hadoop cluster; backup, recovery, and maintenance.

Posted 3 weeks ago

Apply

2.0 - 6.0 years

3 - 7 Lacs

Gurugram

Work from Office

We are looking for a PySpark Developer who loves solving complex problems across a full spectrum of technologies. You will help ensure our technological infrastructure operates seamlessly in support of our business objectives.

Responsibilities
Develop and maintain data pipelines implementing ETL processes.
Take responsibility for Hadoop development and implementation.
Work closely with a data science team implementing data analytic pipelines.
Help define data governance policies and support data versioning processes.
Maintain security and data privacy, working closely with the Data Protection Officer internally.
Analyse a vast number of data stores and uncover insights.

Skillset Required
Ability to design, build, and unit test applications in PySpark.
Experience with Python development and Python data transformations.
Experience with SQL scripting on one or more platforms: Hive, Oracle, PostgreSQL, MySQL, etc.
In-depth knowledge of Hadoop, Spark, and similar frameworks.
Strong knowledge of data management principles.
Experience with normalizing/de-normalizing data structures, and developing tabular, dimensional, and other data models.
Knowledge of YARN, clusters, executors, and cluster configuration.
Hands-on experience with different file formats like JSON, Parquet, CSV, etc.
Experience with the CLI on Linux-based platforms.
Experience analysing current ETL/ELT processes and defining and designing new processes.
Experience analysing business requirements in a BI/Analytics context and designing data models to transform raw data into meaningful insights.
Good to have: knowledge of data visualization.
Experience processing large amounts of structured and unstructured data, including integrating data from multiple sources.

Posted 3 weeks ago

Apply

0 years

40 - 45 Lacs

Bengaluru, Karnataka, India

On-site

About Us
We are on a mission to create India's largest fully automated financial inclusion organization, offering a range of financial services, including micro-loans, to serve the vast underserved middle/lower-income segment. Recognized as one of the Top 10 Google Launchpad-backed AI/ML tech startups, you will experience firsthand the challenges and opportunities of building and scaling our business. Collaborate with brilliant minds driven by the goal of solving macro issues related to financial inclusion. Our services span over 17,000 pin codes in India, having positively impacted over 5.5 million users. Our user profile ranges from micro-entrepreneurs and small retailers to blue/grey-collar workers and salaried employees across various sectors. As part of our team, you'll manage petabytes of data and contribute to organizational growth by deriving and applying data-driven insights, alongside opportunities to innovate and patent AI/ML technologies.

What Can You Expect?
Ownership of the company's success through ESOPs for high performers.
Market-leading competitive salaries (in the 90th percentile).
An open culture that encourages expressing opinions freely.
Opportunities to learn from industry experts.
A chance to positively impact billions of lives by enhancing financial inclusion.
Be part of our journey to re-imagine solutions, delivering world-class, best-of-breed services to delight our customers and make a significant impact on the FinTech industry.

Roles & Responsibilities
Develop and extend our backend platform, processing terabytes of data to deliver unique, personalized financial experiences.
Collaborate directly with tech-focused founding team members and IIT graduates with expertise in designing scalable and robust system architectures.
Design systems from scratch with scalability and security front of mind.
Demonstrate deep knowledge of design patterns in Java, DS, and algorithms.
Monitor and optimize MySQL database queries for peak performance.
Experience with tools like Scala, Kafka, Bigtable, and BigQuery is beneficial but not mandatory.
Mentor junior team members by providing regular feedback and conducting code reviews.

Skills: data structures, MySQL, BigQuery, Bigtable, code, HLD, DS, design patterns, Kafka, Scala, fintech, algorithms, Java, architecture

Posted 3 weeks ago

Apply

1.0 - 3.0 years

9 - 13 Lacs

Pune

Work from Office

Overview
We are hiring an Associate Data Engineer to support our core data pipeline development efforts and gain hands-on experience with industry-grade tools like PySpark, Databricks, and cloud-based data warehouses. The ideal candidate is curious, detail-oriented, and eager to learn from senior engineers while contributing to the development and operationalization of critical data workflows.

Responsibilities
Assist in the development and maintenance of ETL/ELT pipelines using PySpark and Databricks under senior guidance.
Support data ingestion, validation, and transformation tasks across Rating Modernization and Regulatory programs.
Collaborate with team members to gather requirements and document technical solutions.
Perform unit testing, data quality checks, and process monitoring activities (a minimal sketch of such a check appears after this listing).
Contribute to the creation of stored procedures, functions, and views.
Support troubleshooting of pipeline errors and validation issues.

Qualifications
Bachelor's degree in Computer Science, Engineering, or a related discipline.
3+ years of experience in data engineering, or internships in data/analytics teams.
Working knowledge of Python and SQL, and ideally PySpark.
Understanding of cloud data platforms (Databricks, BigQuery, Azure/GCP).
Strong problem-solving skills and eagerness to learn distributed data processing.
Good verbal and written communication skills.

What we offer you
Transparent compensation schemes and comprehensive employee benefits, tailored to your location, ensuring your financial security, health, and overall wellbeing.
Flexible working arrangements, advanced technology, and collaborative workspaces.
A culture of high performance and innovation where we experiment with new ideas and take responsibility for achieving results.
A global network of talented colleagues who inspire, support, and share their expertise to innovate and deliver for our clients.
Global Orientation program to kickstart your journey, followed by access to our Learning@MSCI platform, LinkedIn Learning Pro, and tailored learning opportunities for ongoing skills development.
Multi-directional career paths that offer professional growth and development through new challenges, internal mobility, and expanded roles.
We actively nurture an environment that builds a sense of inclusion, belonging, and connection, including eight Employee Resource Groups: All Abilities, Asian Support Network, Black Leadership Network, Climate Action Network, Hola! MSCI, Pride & Allies, Women in Tech, and Women's Leadership Forum.

At MSCI we are passionate about what we do, and we are inspired by our purpose: to power better investment decisions. You'll be part of an industry-leading network of creative, curious, and entrepreneurial pioneers. This is a space where you can challenge yourself, set new standards, and perform beyond expectations for yourself, our clients, and our industry.

MSCI is a leading provider of critical decision support tools and services for the global investment community. With over 50 years of expertise in research, data, and technology, we power better investment decisions by enabling clients to understand and analyze key drivers of risk and return and confidently build more effective portfolios. We create industry-leading, research-enhanced solutions that clients use to gain insight into and improve transparency across the investment process.

MSCI Inc. is an equal opportunity employer. It is the policy of the firm to ensure equal employment opportunity without discrimination or harassment on the basis of race, color, religion, creed, age, sex, gender, gender identity, sexual orientation, national origin, citizenship, disability, marital and civil partnership/union status, pregnancy (including unlawful discrimination on the basis of a legally protected parental leave), veteran status, or any other characteristic protected by law. MSCI is also committed to working with and providing reasonable accommodations to individuals with disabilities. If you are an individual with a disability and would like to request a reasonable accommodation for any part of the application process, please email Disability.Assistance@msci.com and indicate the specifics of the assistance needed. Please note, this e-mail is intended only for individuals who are requesting a reasonable workplace accommodation; it is not intended for other inquiries.

To all recruitment agencies: MSCI does not accept unsolicited CVs/Resumes. Please do not forward CVs/Resumes to any MSCI employee, location, or website. MSCI is not responsible for any fees related to unsolicited CVs/Resumes.

Note on recruitment scams: We are aware of recruitment scams where fraudsters impersonating MSCI personnel may try to elicit personal information from job seekers. Read our full note on careers.msci.com.
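As a loose, illustrative companion to the data quality responsibilities in the listing above (not MSCI's actual framework; the column names, data, and thresholds are invented), here is a minimal Spark check in Scala that flags empty batches, null keys, and duplicate records before a load proceeds:

```scala
import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.functions._

object DataQualitySketch {
  // Returns a list of human-readable failures; an empty list means the batch looks fine.
  def basicChecks(df: DataFrame, keyCol: String): Seq[String] = {
    val total = df.count()
    val nullKeys = df.filter(col(keyCol).isNull).count()
    val duplicates = total - df.dropDuplicates(keyCol).count()

    Seq(
      if (total == 0) Some("batch is empty") else None,
      if (nullKeys > 0) Some(s"$nullKeys rows have a null $keyCol") else None,
      if (duplicates > 0) Some(s"$duplicates duplicate values of $keyCol") else None
    ).flatten
  }

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("dq-sketch").getOrCreate()
    import spark.implicits._

    // Hypothetical ratings batch with an intentional duplicate and a null key.
    val batch = Seq(("R1", 4.2), ("R2", 3.9), ("R2", 3.9), (null, 1.0))
      .toDF("rating_id", "score")

    val failures = basicChecks(batch, "rating_id")
    if (failures.nonEmpty) {
      // In a real pipeline this would fail the job or route the batch to quarantine.
      failures.foreach(msg => println(s"DQ check failed: $msg"))
    } else {
      println("DQ checks passed")
    }

    spark.stop()
  }
}
```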

Posted 3 weeks ago

Apply

4.0 - 9.0 years

4 - 9 Lacs

Gurugram

Work from Office

As a Mid Databricks Engineer, you will play a pivotal role in designing, implementing, and optimizing data processing pipelines and analytics solutions on the Databricks platform. You will collaborate closely with cross-functional teams to understand business requirements, architect scalable solutions, and ensure the reliability and performance of our data infrastructure. This role requires deep expertise in Databricks, strong programming skills, and a passion for solving complex engineering challenges.

What you'll do:
- Design and develop data processing pipelines and analytics solutions using Databricks.
- Architect scalable and efficient data models and storage solutions on the Databricks platform.
- Collaborate with architects and other teams to migrate the current solution to Databricks.
- Optimize the performance and reliability of Databricks clusters and jobs to meet SLAs and business requirements.
- Use best practices for data governance, security, and compliance on the Databricks platform.
- Mentor junior engineers and provide technical guidance.
- Stay current with emerging technologies and trends in data engineering and analytics to drive continuous improvement.

You'll be expected to have:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 5 to 8 years of overall experience and 2+ years of experience designing and implementing data solutions on the Databricks platform.
- Proficiency in programming languages such as Python, Scala, or SQL.
- Strong understanding of distributed computing principles and experience with big data technologies such as Apache Spark.
- Experience with cloud platforms such as AWS, Azure, or GCP, and their associated data services.
- Proven track record of delivering scalable and reliable data solutions in a fast-paced environment.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills, with the ability to work effectively in cross-functional teams.
- Good to have: experience with containerization technologies such as Docker and Kubernetes.
- Knowledge of DevOps practices for automated deployment and monitoring of data pipelines.

Posted 3 weeks ago

Apply