
6093 Scala Jobs - Page 24

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

0 years

0 Lacs

Andhra Pradesh, India

On-site

We are seeking a highly skilled and motivated Big Data Engineer to join our data engineering team. The ideal candidate will have hands-on experience with the Hadoop ecosystem and Apache Spark, and programming expertise in Python (PySpark), Scala, and Java. You will be responsible for designing, developing, and optimizing scalable data pipelines and big data solutions to support analytics and business intelligence initiatives.
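
For context, pipelines in a role like this often look something like the following — a minimal, hypothetical Spark batch job in Scala (paths and column names are invented for illustration):

    import org.apache.spark.sql.{SparkSession, functions => F}

    object EventsPipeline {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("events-pipeline")
          .getOrCreate()

        // Read raw CSV events (schema inference kept simple for the sketch)
        val raw = spark.read
          .option("header", "true")
          .option("inferSchema", "true")
          .csv("hdfs:///data/raw/events") // hypothetical input path

        // Basic cleansing and a daily aggregate
        val daily = raw
          .filter(F.col("user_id").isNotNull)
          .withColumn("event_date", F.to_date(F.col("event_ts")))
          .groupBy("event_date", "event_type")
          .agg(F.count(F.lit(1)).as("events"))

        // Write partitioned Parquet for downstream analytics and BI
        daily.write
          .mode("overwrite")
          .partitionBy("event_date")
          .parquet("hdfs:///data/curated/daily_events")

        spark.stop()
      }
    }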

Posted 1 week ago

Apply

5.0 - 10.0 years

15 - 25 Lacs

Hyderabad/Secunderabad, Bangalore/Bengaluru, Delhi / NCR

Hybrid

Genpact (NYSE: G) is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients. Powered by our purpose – the relentless pursuit of a world that works better for people – we serve and transform leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI.

Inviting applications for the role of Lead Consultant – Data Engineer (AWS + Python, Spark, Kafka for ETL).

Responsibilities:
- Develop, deploy, and manage ETL pipelines using AWS services, Python, Spark, and Kafka.
- Integrate structured and unstructured data from various sources into data lakes and data warehouses.
- Design and deploy scalable, highly available, and fault-tolerant AWS data processes using AWS data services (Glue, Lambda, Step Functions, Redshift).
- Monitor and optimize the performance of cloud resources to ensure efficient utilization and cost-effectiveness.
- Implement and maintain security measures to protect data and systems within the AWS environment, including IAM policies, security groups, and encryption mechanisms.
- Migrate application data from legacy databases to cloud-based solutions (Redshift, DynamoDB, etc.) for high availability at low cost.
- Develop application programs using Big Data technologies such as Apache Hadoop and Apache Spark, with appropriate cloud services such as Amazon AWS.
- Build data pipelines by building ETL (Extract-Transform-Load) processes.
- Implement backup, disaster recovery, and business continuity strategies for cloud-based applications and data.
- Analyse business and functional requirements, which involves reviewing existing system configurations and operating methodologies as well as understanding evolving business needs.
- Analyse requirements/user stories in business meetings, assess the impact of requirements on different platforms/applications, and convert business requirements into technical requirements.
- Participate in design reviews to provide input on functional requirements, product designs, schedules, and potential problems.
- Understand the current application infrastructure and suggest cloud-based solutions that reduce operational cost and require minimal maintenance while providing high availability and improved security.
- Perform unit testing on modified software to ensure that new functionality works as expected while existing functionality continues to work unchanged.
- Coordinate with release management and other supporting teams to deploy changes to the production environment.

Qualifications we seek in you!

Minimum Qualifications:
- Experience in designing and implementing data pipelines, building data applications, and data migration on AWS.
- Strong experience implementing data lakes using AWS services such as Glue, Lambda, Step Functions, and Redshift.
- Experience with Databricks is an added advantage.
- Strong experience in Python and SQL.
- Proven expertise in AWS services such as S3, Lambda, Glue, EMR, and Redshift.
- Advanced programming skills in Python for data processing and automation.
- Hands-on experience with Apache Spark for large-scale data processing.
- Experience with Apache Kafka for real-time data streaming and event processing.
- Proficiency in SQL for data querying and transformation.
- Strong understanding of security principles and best practices for cloud-based environments.
- Experience with monitoring tools and implementing proactive measures to ensure system availability and performance.
- Excellent problem-solving skills and the ability to troubleshoot complex issues in a distributed, cloud-based environment.
- Strong communication and collaboration skills to work effectively with cross-functional teams.

Preferred Qualifications/Skills:
- Master's degree in Computer Science, Electronics, or Electrical Engineering.
- AWS Data Engineering and cloud certifications; Databricks certifications.
- Experience with multiple data integration technologies and cloud platforms.
- Knowledge of change and incident management processes.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values diversity and inclusion, respect and integrity, customer focus, and innovation. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook. Please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any way. Examples of such scams include purchasing a 'starter kit', paying to apply, or purchasing equipment or training.
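
As a rough illustration of the AWS-flavoured ETL work described above, here is a hypothetical Spark job in Scala that reads JSON from an S3 raw zone and writes partitioned Parquet to a curated zone (bucket names and columns are invented; Glue, Kafka, and Redshift integration would be additional):

    import org.apache.spark.sql.{SparkSession, functions => F}

    object S3EtlJob {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().appName("s3-etl").getOrCreate()

        // Extract: JSON landed in a raw S3 zone (bucket/prefix are placeholders)
        val orders = spark.read.json("s3a://example-raw-zone/orders/")

        // Transform: drop bad records and derive a partition column
        val cleaned = orders
          .filter(F.col("order_id").isNotNull && F.col("amount") > 0)
          .withColumn("order_date", F.to_date(F.col("created_at")))

        // Load: columnar Parquet in the curated zone, partitioned by date
        cleaned.write
          .mode("append")
          .partitionBy("order_date")
          .parquet("s3a://example-curated-zone/orders/")

        spark.stop()
      }
    }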

Posted 1 week ago

Apply

8.0 - 13.0 years

15 - 19 Lacs

Mumbai

Work from Office

Overview: We are seeking an experienced and passionate Solution Architect – Databricks to join our team. With a strong background in data engineering, cloud platforms, and solution architecture, you will play a critical role in designing scalable, high-performance data solutions leveraging Databricks. You'll work closely with clients, business stakeholders, and cross-functional teams to understand business needs and translate them into effective technical solutions.

Key Responsibilities:
- Design and architect scalable data platforms and pipelines using the Databricks Lakehouse Platform.
- Lead the development and implementation of ETL/ELT workflows, data models, and analytics solutions.
- Collaborate with clients and internal teams to gather requirements and create technical architecture blueprints.
- Provide guidance on best practices for Spark optimization, data ingestion, transformation, and governance on Databricks.
- Integrate Databricks with Azure/AWS/GCP and other enterprise tools such as Power BI, Tableau, Kafka, and Delta Lake.
- Conduct technical workshops, POCs (proof-of-concepts), and solution demos for stakeholders.
- Ensure the security, performance, and cost-efficiency of data solutions.
- Mentor junior engineers and architects, promoting knowledge-sharing across the team.

Required Skills & Experience:
- 8+ years of experience in data engineering, analytics, or solution architecture roles.
- 3+ years of hands-on experience with Databricks and Apache Spark.
- Strong understanding of big data architectures, data lakes, and lakehouses.
- Proficiency in Python and SQL; Scala is optional.
- Experience with cloud platforms (Azure, AWS, or GCP), especially services like Azure Data Lake, AWS S3, or GCP BigQuery.
- Familiarity with data governance, security practices, and compliance standards.
- Experience with Delta Lake, MLflow, or Unity Catalog is a plus.
- Strong communication skills with the ability to explain technical concepts to non-technical audiences.
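
As a small, hedged sketch of the Delta Lake workflow this listing references — assuming the delta-spark library is on the classpath; the table path and data are illustrative:

    import org.apache.spark.sql.SparkSession

    object DeltaSketch {
      def main(args: Array[String]): Unit = {
        // Delta Lake requires these session extensions (delta-spark dependency assumed)
        val spark = SparkSession.builder()
          .appName("delta-sketch")
          .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
          .config("spark.sql.catalog.spark_catalog",
                  "org.apache.spark.sql.delta.catalog.DeltaCatalog")
          .getOrCreate()

        val df = spark.range(0, 1000).toDF("id")

        // Write an ACID Delta table; "overwrite" replaces prior data atomically
        df.write.format("delta").mode("overwrite").save("/tmp/delta/ids")

        // Read it back; time travel to an earlier version is also possible, e.g.
        //   spark.read.format("delta").option("versionAsOf", 0).load("/tmp/delta/ids")
        spark.read.format("delta").load("/tmp/delta/ids").show(5)

        spark.stop()
      }
    }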

Posted 1 week ago

Apply

7.0 - 10.0 years

16 - 20 Lacs

Bengaluru

Work from Office

Job Title: IBU Solution Services AI Senior Engineer
Job Function: Analytics and Data Sciences
Location: Bangalore, India
Hiring Manager: Shubhra Verma
Role Level: 7 to 10 years

Responsibilities:
- Lead the design, development, and deployment of advanced machine learning models and algorithms for various applications.
- Perform comprehensive data analysis, feature engineering, and model training with large and complex datasets.
- Collaborate with cross-functional teams to understand business requirements and translate them into sophisticated technical solutions.
- Architect and deploy scalable AI/ML models, including large language models (LLMs) and transformer-based architectures.
- Implement MLOps best practices for CI/CD, automated model retraining, and lifecycle management.
- Optimize AI/ML pipelines for distributed computing, leveraging cloud platforms and accelerators (GPUs/TPUs).
- Develop and maintain scalable advanced AI solutions such as retrieval-augmented generation (RAG) and fine-tuning techniques (LoRA, PEFT).
- Conduct thorough model evaluation, validation, and testing to ensure high performance and accuracy.
- Stay at the forefront of AI/ML advancements and integrate cutting-edge techniques into existing and new projects.
- Mentor and provide technical guidance to junior developers and team members.
- Author and maintain detailed documentation of processes, methodologies, and best practices.
- Implement and optimize cutting-edge deep learning models, such as generative adversarial networks (GANs) and transformer architectures (e.g., BERT, GPT).
- Explore and implement federated learning approaches to build AI/ML models while preserving patient data privacy and security.
- Integrate explainable AI (XAI) techniques to ensure the transparency and interpretability of machine learning models.
- Translate complex AI concepts into actionable insights for business stakeholders.

Required Qualifications:
- Bachelor's or Master's in Computer Science, AI, ML, Data Science, or related fields.
- 7+ years of experience with AI/ML solutions.
- Extensive experience developing and deploying machine learning models in a production environment.
- Expertise in Python (NumPy, Pandas, scikit-learn) and proficiency in Java, Scala, or similar languages.
- Deep understanding of deep learning frameworks (TensorFlow, PyTorch, JAX) and NLP libraries (Hugging Face Transformers).
- Hands-on experience with MLOps, including containerization (Docker, Kubernetes), CI/CD, and model monitoring.
- Well versed in agentic AI; deep understanding of the LangChain ecosystem.
- Experience with distributed computing (Apache Spark, Ray) and large-scale dataset processing.
- Proficiency in vector databases (FAISS, Chroma DB, Pinecone) for efficient similarity search.
- Strong expertise in fine-tuning transformer models, hyperparameter optimization, and reinforcement learning.
- In-depth experience with cloud platforms such as AWS, Azure, or Google Cloud for deploying AI solutions.
- Strong analytical, problem-solving, and communication skills to bridge technical and business perspectives.

Preferred Qualifications:
- Experience in regulated industries such as healthcare or biopharma.
- Contributions to AI research, open-source projects, or top-tier AI conferences.
- Familiarity with generative AI, prompt engineering, and advanced AI topics like graph neural networks.
- Proven ability to scale AI teams and lead complex AI projects in high-growth environments.

Posted 1 week ago

Apply

7.0 - 12.0 years

9 - 13 Lacs

Bengaluru

Work from Office

Job Description – Senior Software Engineer, ML

As Relyance AI's ML/NLP Engineer, you will strategize, drive, and execute initiatives in NLP for information extraction from legal documents, ML/NLP for information extraction from code, general ML in code analysis, and overall AI backend work. You will partner with cross-functional stakeholders to design and build flexible, powerful, and robust features that scale the impact of AI for our customers. You will work with large language models like GPT-4 and LLaMA 2, as well as models at the scale of T5 or BERT. You will create novel model architectures and ML/NLP techniques, build data curation and model training workflows, and perform error analysis to drive feature development. As a senior engineer, you will be a core member of a team building a system whose data and predictions evolve rapidly over time. You will need to pay close attention to detail, anticipate and welcome constant change, and maintain a forward-thinking outlook, all while being fast and scrappy to address present needs.

As a Senior ML/NLP Software Engineer, your role will include:
- Strategy: using your experience and understanding of how ML and NLP features are built to achieve state-of-the-art results in real products, you will generate data-driven insights on how to evolve the capabilities of Relyance AI.
- Execution: create practical ML and NLP solutions, making customer-centric prioritization decisions to balance immediate impact against long-term bets; partner, align, and collaborate with other engineering teams to implement features end to end, in particular across data engineering systems such as Airflow, Vertex AI, BigQuery, and the Relyance backend.
- Design: deeply understand how everything fits together; architect systems that balance scrappiness for current needs with a forward-thinking outlook to incorporate state-of-the-art NLP and ML techniques; continuously look for opportunities to automate and build tools that lower operational barriers.
- Being a key member of the team, solving its most complex problems with simple, pragmatic solutions.

This role could be a fit for you if you bring:
- 7+ years of experience as a key member of teams building ML and NLP solutions, or a PhD in a relevant field, preferably with industry experience.
- Expert-level proficiency in languages like Python, Java, C#, C++, or Scala.
- Strong data structures, algorithms, and OO software design and implementation skills.
- Ability to learn and operate across the full stack, from ML and NLP systems to cloud infrastructure to the AI backend.
- Experience as a creative and strategic thinker with a mindset for building powerful, robust, and flexible solutions.
- A get-stuff-done attitude: you enjoy being hands-on and working alongside the team to solve its most pressing problems in a fast-paced, collaborative environment.
- A track record of successfully influencing product direction through a strong perspective that motivates engineers to develop simple, pragmatic solutions to complex problems.
- Clear and concise communication, active listening and empathy, and a respectful, collaborative approach that earns the trust of your peers.

Bonus points for:
- Experience with information extraction, semantic parsing, or practical application of LLMs.
- Experience with privacy technology.
- Startup experience.
- An advanced technical degree.

Working at Relyance AI

At Relyance AI, we create an unreasonably hospitable and data-driven culture. We prioritize exceeding customer, and each other's, expectations in every interaction. This means empowered team members solving problems proactively based on information, crafting personalized experiences, and radiating enthusiasm. Behind the scenes, trust and freedom allow team members to find creative solutions, while shared purpose and recognition fuel a spirit of greatness to truly wow customers and each other. We deconstruct failures to learn from them and take great pride in our successes, celebrating both.

Relyance AI is proud to be an equal-opportunity employer. We celebrate representation and are committed to creating an inclusive environment for all employees. We are committed to fair and equitable compensation practices. We use data-driven pay practices with the goal of ensuring our offerings are competitive with the market and our team members are compensated correctly based on their roles, experience, and location.

Posted 1 week ago

Apply

5.0 - 10.0 years

9 - 14 Lacs

Hyderabad

Work from Office

Design, develop, and maintain high-quality software solutions. Collaborate with cross-functional teams to define, design, and ship new features.

Responsibilities:
- Strong programming knowledge, including design patterns and debugging, in Java or Scala.
- Design and implement data engineering frameworks on HDFS, Spark, and EMR.
- Implement and manage Kafka streaming and containerized microservices.
- Work with RDBMS (Aurora MySQL) and NoSQL (Cassandra) databases.
- Utilize AWS Cloud services such as S3, EFS, MSK, ECS, and EMR.
- Ensure the performance, quality, and responsiveness of applications.
- Troubleshoot and resolve software defects and issues.
- Write clean, maintainable, and efficient code.
- Participate in code reviews and contribute to team knowledge sharing.

You will report to a Senior Manager. This hybrid role requires working from the Hyderabad office two days a week.

About Experian — Experience and Skills:
- Engineer with 5+ years of hands-on experience and strong coding skills, preferably in Scala and Java.
- Experience with data engineering: Big Data, EMR, Airflow, Spark, Athena.
- AWS Cloud experience: S3, EFS, MSK, ECS, EMR, etc.
- Experience with Kafka streaming and containerized microservices.
- Knowledge of and experience with RDBMS (Aurora MySQL) and NoSQL (Cassandra) databases.

Benefits: Experian cares for employees' work-life balance, health, safety, and wellbeing. In support of this endeavour, we offer best-in-class family well-being benefits, enhanced medical benefits, and paid time off. #LI-Onsite. Find out what it's like to work for Experian by clicking here.
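
For flavour, a minimal Spark Structured Streaming job in Scala consuming a Kafka topic and landing Parquet on S3 — broker, topic, and paths are hypothetical, and the spark-sql-kafka connector is assumed on the classpath:

    import org.apache.spark.sql.SparkSession

    object KafkaToParquet {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().appName("kafka-to-parquet").getOrCreate()

        // Source: subscribe to a Kafka topic (names are illustrative)
        val stream = spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "events")
          .load()

        // Kafka values arrive as bytes; cast to string for downstream parsing
        val values = stream.selectExpr("CAST(value AS STRING) AS json")

        // Sink: append to Parquet with checkpointing for fault tolerance
        values.writeStream
          .format("parquet")
          .option("path", "s3a://example-bucket/events/")
          .option("checkpointLocation", "s3a://example-bucket/checkpoints/events/")
          .start()
          .awaitTermination()
      }
    }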

Posted 1 week ago

Apply

9.0 - 15.0 years

7 - 11 Lacs

Pune, Chennai, Bengaluru

Work from Office

Skill: Scala | Grade: C2/D1 | Location: Pune/Chennai/Bangalore | Notice Period: Immediate to 15-day joiners only

- Strong analytical skills, with prior experience working within Risk, Finance, or Treasury.
- Good experience with Scala and Apache Spark, the open-source cluster-computing framework for data analytics.
- Experience working with different file formats such as JSON, Parquet, Avro, ORC, and XML (see the sketch after this listing).
- Excellent interpersonal skills, with experience briefing, de-briefing, and presenting to senior executives, and effective listening skills.
- Able to communicate effectively, both orally and in writing, with clients, colleagues, and external vendors.
- Excellent time management and planning skills, with experience working under pressure.
- Organized and able to prioritize multiple incident priorities.
- Highest standards of personal integrity, professional conduct, and ethics.
- Incident, problem, and change management skills.
- Minimum qualification: BE/BTech or equivalent.

Skills: Scala, Spark
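
To illustrate the file-format breadth mentioned above, a short hypothetical snippet (runnable in spark-shell) reading each format with Spark — Avro assumes the spark-avro package and XML the external spark-xml package; all paths are placeholders:

    // Paths below are placeholders
    val jsonDf    = spark.read.json("/data/in/events.json")
    val parquetDf = spark.read.parquet("/data/in/events.parquet")
    val orcDf     = spark.read.orc("/data/in/events.orc")
    val avroDf    = spark.read.format("avro").load("/data/in/events.avro") // spark-avro package
    val xmlDf     = spark.read.format("xml")                               // spark-xml package
      .option("rowTag", "event")
      .load("/data/in/events.xml")

    // Whatever the source format, the result is a DataFrame with one common API
    jsonDf.printSchema()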

Posted 1 week ago

Apply

9.0 - 15.0 years

6 - 10 Lacs

Gurugram

Work from Office

Skill: Scala with AWS | Grade: C2 | Location: Pan India | Notice Period: Immediate to 15-day joiners only

We are seeking a skilled Big Data Engineer with strong hands-on experience in Apache Spark, Scala, and AWS cloud services. The ideal candidate will be responsible for developing scalable data pipelines, processing large datasets, and deploying efficient Spark-based solutions on the AWS ecosystem.

Responsibilities:
- Design and develop distributed data processing pipelines using Apache Spark and Scala.
- Optimize Spark jobs for performance and scalability on AWS infrastructure (see the sketch after this listing).
- Integrate Spark applications with AWS services such as S3, EMR, Lambda, Glue, RDS, Athena, and Redshift.
- Write clean, reusable, production-grade Scala code.
- Work with large-scale structured and unstructured data in real-time and batch modes.
- Ensure data quality, reliability, and consistency through monitoring and validation.
- Collaborate with data scientists, analysts, and business teams to understand requirements and deliver insights.
- Implement best practices for data engineering in a cloud-native environment.

Required Skills:
- Strong programming skills in Scala.
- 3+ years of hands-on experience with Apache Spark (RDD/DataFrame/SQL APIs).
- Experience working with AWS services such as EMR, S3, Glue, Lambda, and EC2.
- Proficient in writing and optimizing complex Spark transformations and actions.
- Experience with large-scale data processing and distributed systems.

Preferred Skills:
- Experience with Kafka, Airflow, or similar orchestration tools.
- Working knowledge of Python.
- Experience with containerized deployments (Docker, Kubernetes).
- AWS certification (e.g., AWS Certified Data Analytics – Specialty) is a plus.

Skills: AWS, Scala, Spark
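
A small, assumption-laden example of the kind of Spark optimization such a role involves — broadcasting a small dimension table to avoid a shuffle, caching a reused result, and controlling output partitioning (table paths are invented; runnable in spark-shell):

    import org.apache.spark.sql.functions.broadcast

    val facts = spark.read.parquet("s3a://example/facts/") // large fact table
    val dims  = spark.read.parquet("s3a://example/dims/")  // small lookup table

    // Broadcast join: ships the small table to every executor,
    // avoiding a full shuffle of the large one
    val joined = facts.join(broadcast(dims), Seq("dim_id"))

    // Cache only if the result is reused across several actions
    joined.cache()

    // Coalesce before writing to avoid thousands of tiny output files
    joined.coalesce(64)
      .write.mode("overwrite")
      .parquet("s3a://example/joined/")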

Posted 1 week ago

Apply

9.0 - 14.0 years

30 - 35 Lacs

Bengaluru

Work from Office

Key Responsibilities (must have 9+ years of experience):
- Design, development, testing, and deployment: lead the creation of scalable Data & AI applications using software engineering best practices such as automation, version control, and CI/CD. Develop and implement rigorous testing strategies to ensure application reliability and performance. Oversee deployment processes, addressing issues related to configuration, environment, or security.
- Engineering and analytics: translate Data & AI use-case requirements into effective data models and pipelines, ensuring data integrity through statistical quality procedures and advanced AI techniques.
- API and microservice development: architect and build secure, scalable microservices and APIs, ensuring broad usability, security, and adherence to best practices in documentation and version control.
- Platform scalability and optimization: evaluate and select optimal technologies for cloud and on-premise deployments, implementing strategies for scalability, performance monitoring, and cost optimization.

Additional skills:
- Knowledge of machine learning frameworks (TensorFlow, PyTorch, Keras).
- Understanding of MLOps (machine learning operations) and continuous integration/deployment (CI/CD).
- Familiarity with deployment tools (Docker, Kubernetes).

Technologies: expertise with Data & AI technologies (e.g., Spark, Databricks), programming languages (Java, Scala, SQL), API development patterns (e.g., HTTP/REST, GraphQL), and cloud platforms (Azure).

Good-to-have skills: expertise with Data & AI technologies (e.g., Kafka, Snowflake), programming languages (Python, SQL), and API development patterns (e.g., HTTP/REST, GraphQL).

Location: IND:KA:Bengaluru / Innovator Building, ITPB, Whitefield Rd - Adm: Intl Tech Park, Innovator Bldg
Job ID: R-74975 | Date posted: 07/15/2025

Posted 1 week ago

Apply

5.0 - 8.0 years

27 - 42 Lacs

Bengaluru

Work from Office

Job Summary

As a Software Engineer in NetApp India's R&D division, you will be responsible for the design, development, and validation of software for big data engineering across both cloud and on-premises environments. You will be part of a highly skilled technical team named NetApp Active IQ. The Active IQ DataHub platform processes over 10 trillion data points per month that feed a multi-petabyte data lake. The platform is built using Kafka, a serverless platform running on Kubernetes, Spark, and various NoSQL databases. It enables the use of advanced AI and ML techniques to uncover opportunities to proactively protect and optimize NetApp storage, and then provides the insights and actions to make it happen. We call this "actionable intelligence". (A streaming sketch follows this listing.)

Job Requirements
- Design and build our Big Data Platform, with an understanding of scale, performance, and fault-tolerance.
- Interact with Active IQ engineering teams across geographies to leverage expertise and contribute to the tech community.
- Identify the right tools to deliver product features by performing research, POCs, and interacting with various open-source forums.
- Work on technologies related to NoSQL, SQL, and in-memory databases.
- Conduct code reviews to ensure code quality, consistency, and adherence to best practices.

Technical Skills
- Hands-on big data development experience is required.
- Up-to-date expertise in data engineering and complex data pipeline development.
- Design, develop, implement, and tune distributed data processing pipelines that process large volumes of data, focusing on scalability, low latency, and fault-tolerance in every system built.
- Awareness of data governance (data quality, metadata management, security, etc.).
- Experience with one or more of Python, Java, and Scala.
- Knowledge of and experience with Kafka, Storm, Druid, Cassandra, or Presto is an added advantage.

Education
- A minimum of 5 years of experience is required; 5-8 years is preferred.
- A Bachelor of Science degree in Electrical Engineering or Computer Science, a Master's degree, or equivalent experience is required.
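
As a hedged sketch of the kind of high-volume stream processing described above — a Spark Structured Streaming aggregation with event-time windows and a watermark to keep state bounded (topic and field names are hypothetical; runnable in spark-shell with the Kafka connector):

    import org.apache.spark.sql.functions._
    import org.apache.spark.sql.types._

    // Assumed telemetry payload: {"systemId": ..., "metric": ..., "eventTime": ...}
    val schema = StructType(Seq(
      StructField("systemId", StringType),
      StructField("metric", DoubleType),
      StructField("eventTime", TimestampType)))

    val points = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")
      .option("subscribe", "telemetry")
      .load()
      .select(from_json(col("value").cast("string"), schema).as("p"))
      .select("p.*")

    // The watermark bounds how late data may arrive, keeping state finite
    val perSystem = points
      .withWatermark("eventTime", "10 minutes")
      .groupBy(window(col("eventTime"), "5 minutes"), col("systemId"))
      .agg(avg("metric").as("avgMetric"))

    perSystem.writeStream
      .outputMode("update")
      .format("console")
      .option("checkpointLocation", "/tmp/chk/telemetry")
      .start()
      .awaitTermination()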

Posted 1 week ago

Apply

8.0 - 13.0 years

7 - 11 Lacs

Pune

Work from Office

Capco, a Wipro company, is a global technology and management consulting firm. Capco was awarded Consultancy of the Year in the British Bank Awards and ranked among the Top 100 Best Companies for Women in India 2022 by Avtar & Seramount. With a presence across 32 cities worldwide, we support 100+ clients across the banking, financial, and energy sectors, and we are recognized for our deep transformation execution and delivery.

WHY JOIN CAPCO
You will work on engaging projects with the largest international and local banks, insurance companies, payment service providers, and other key players in the industry — projects that will transform the financial services industry.

MAKE AN IMPACT
Innovative thinking, delivery excellence, and thought leadership to help our clients transform their business. Together with our clients and industry partners, we deliver disruptive work that is changing energy and financial services.

#BEYOURSELFATWORK
Capco has a tolerant, open culture that values diversity, inclusivity, and creativity.

CAREER ADVANCEMENT
With no forced hierarchy at Capco, everyone has the opportunity to grow as we grow, taking their career into their own hands.

DIVERSITY & INCLUSION
We believe that diversity of people and perspective gives us a competitive advantage.

JOB SUMMARY
Position: Senior Consultant | Location: Capco locations (Bengaluru/Chennai/Hyderabad/Pune/Mumbai/Gurugram) | Band: M3/M4 (8 to 14 years)

Role Description: Job Title: Senior Consultant – Data Engineer

Responsibilities:
- Design, build, and optimise data pipelines and ETL processes in Azure Databricks, ensuring high performance, reliability, and scalability.
- Implement best practices for data ingestion, transformation, and cleansing to ensure data quality and integrity.
- Work within the client's best-practice guidelines as set out by the Data Engineering Lead.
- Work with data modellers and testers to ensure pipelines are implemented correctly.
- Collaborate as part of a cross-functional team to understand business requirements and translate them into technical solutions.

Role Requirements:
- Strong data engineer with experience in financial services.
- Knowledge of and experience building data pipelines in Azure Databricks.
- A continual desire to implement strategic or optimal solutions, avoiding workarounds or short-term tactical solutions where possible.
- Work within an Agile team.

Experience/Skillset:
- 8+ years of experience in data engineering.
- Good skills in SQL, Python, and PySpark.
- Good knowledge of Azure Databricks (understanding of Delta tables, Apache Spark, Unity Catalog).
- Experience writing, optimizing, and analyzing SQL and PySpark code, with a robust ability to interpret complex data requirements and architect solutions.
- Good knowledge of the SDLC.
- Familiarity with Agile/Scrum ways of working.
- Strong verbal and written communication skills.
- Ability to manage multiple priorities and deliver to tight deadlines.

We offer:
- A work culture focused on innovation and creating lasting value for our clients and employees.
- Ongoing learning opportunities to help you acquire new skills or deepen existing expertise.
- A flat, non-hierarchical structure that will enable you to work with senior partners and directly with clients.
- A diverse, inclusive, meritocratic culture.

#LI-Hybrid

Posted 1 week ago

Apply

5.0 - 8.0 years

9 - 14 Lacs

Bengaluru

Work from Office

The Senior Data Engineer will be responsible for designing, developing, and maintaining scalable data pipelines and building out new API integrations to support continuing increases in data volume and complexity. They will collaborate with analytics and business teams to improve the data models that feed business intelligence tools, increasing data accessibility and fostering data-driven decision making across the organization.

Responsibilities:
- Design, construct, install, test, and maintain highly scalable data management systems and data pipelines.
- Ensure systems meet business requirements and industry practices.
- Build high-performance algorithms, prototypes, predictive models, and proofs of concept.
- Research opportunities for data acquisition and new uses for existing data.
- Develop data set processes for data modeling, mining, and production.
- Integrate new data management technologies and software engineering tools into existing structures.
- Create custom software components and analytics applications.
- Install and update disaster recovery procedures.
- Collaborate with data architects, modelers, and IT team members on project goals.
- Provide senior-level technical consulting to peer data engineers during data application design and development for highly complex and critical data projects.

Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent work experience.
- 5-8 years of proven experience as a Senior Data Engineer or in a similar role.
- Experience with big data tools: Hadoop, Spark, Kafka, Ansible, Chef, Terraform, Airflow, Protobuf RPC, etc.
- Expert-level SQL skills for data manipulation (DML) and validation (DB2).
- Experience with data pipeline and workflow management tools.
- Experience with object-oriented/functional scripting languages: Python, Java, Go, etc.
- Strong problem-solving and analytical skills.
- Excellent verbal communication and good interpersonal skills.
- Ability to provide technical leadership for the team.

Posted 1 week ago

Apply

7.0 - 12.0 years

7 - 11 Lacs

Pune

Work from Office

Capco, a Wipro company, is a global technology and management consulting firm. Capco was awarded Consultancy of the Year in the British Bank Awards and ranked among the Top 100 Best Companies for Women in India 2022 by Avtar & Seramount. With a presence across 32 cities worldwide, we support 100+ clients across the banking, financial, and energy sectors, and we are recognized for our deep transformation execution and delivery.

WHY JOIN CAPCO
You will work on engaging projects with the largest international and local banks, insurance companies, payment service providers, and other key players in the industry — projects that will transform the financial services industry.

MAKE AN IMPACT
Innovative thinking, delivery excellence, and thought leadership to help our clients transform their business. Together with our clients and industry partners, we deliver disruptive work that is changing energy and financial services.

#BEYOURSELFATWORK
Capco has a tolerant, open culture that values diversity, inclusivity, and creativity.

CAREER ADVANCEMENT
With no forced hierarchy at Capco, everyone has the opportunity to grow as we grow, taking their career into their own hands.

DIVERSITY & INCLUSION
We believe that diversity of people and perspective gives us a competitive advantage.

JOB SUMMARY
Position: Senior Consultant | Location: Pune/Bangalore | Band: M3/M4 (7 to 14 years)

Role Description — Must-Have Skills:
- 4+ years of experience (minimum) in PySpark and Scala with Spark.
- Proficient in debugging and data analysis.
- 4+ years of Spark experience.
- Understanding of the SDLC and the big data application life cycle.
- Experience with GitHub and Git commands.
- Experience with CI/CD tools such as Jenkins and Ansible is good to have.
- Fast problem solver and self-starter.
- Experience using Control-M and ServiceNow (for incident management).
- Positive attitude and good communication skills, both written and verbal, free of mother-tongue interference.

We offer:
- A work culture focused on innovation and creating lasting value for our clients and employees.
- Ongoing learning opportunities to help you acquire new skills or deepen existing expertise.
- A flat, non-hierarchical structure that will enable you to work with senior partners and directly with clients.
- A diverse, inclusive, meritocratic culture.

#LI-Hybrid

Posted 1 week ago

Apply

5.0 - 9.0 years

9 - 13 Lacs

Pune

Work from Office

Capco, a Wipro company, is a global technology and management consulting firm. Capco was awarded Consultancy of the Year in the British Bank Awards and ranked among the Top 100 Best Companies for Women in India 2022 by Avtar & Seramount. With a presence across 32 cities worldwide, we support 100+ clients across the banking, financial, and energy sectors, and we are recognized for our deep transformation execution and delivery.

WHY JOIN CAPCO
You will work on engaging projects with the largest international and local banks, insurance companies, payment service providers, and other key players in the industry — projects that will transform the financial services industry.

MAKE AN IMPACT
Innovative thinking, delivery excellence, and thought leadership to help our clients transform their business. Together with our clients and industry partners, we deliver disruptive work that is changing energy and financial services.

#BEYOURSELFATWORK
Capco has a tolerant, open culture that values diversity, inclusivity, and creativity.

CAREER ADVANCEMENT
With no forced hierarchy at Capco, everyone has the opportunity to grow as we grow, taking their career into their own hands.

DIVERSITY & INCLUSION
We believe that diversity of people and perspective gives us a competitive advantage.

Big Data Tester
Location: Pune (for Mastercard) | Experience Level: 5-9 years

Minimum skill set required (must have):
- Python
- PySpark
- Testing skills and best practices for data validation
- SQL (hands-on experience, especially with complex queries) and ETL

Good to have:
- Unix
- Big Data: Hadoop, Spark, Kafka, NoSQL databases (MongoDB, Cassandra), Hive, etc.
- Data Warehouse — Traditional: Oracle, Teradata, SQL Server; Modern Cloud: Amazon Redshift, Google BigQuery, Snowflake
- AWS development experience (not mandatory, but beneficial)

Best fit: Python + PySpark + testing + hands-on SQL and ETL, plus the good-to-have skills. A sketch of this kind of pipeline validation follows below.
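
To make the testing focus concrete, here is a small hypothetical validation harness in Scala with Spark, comparing source and target after an ETL load — table paths and rules are invented, and a real suite would live in a test framework (runnable in spark-shell):

    val source = spark.read.parquet("/data/source/orders")
    val target = spark.read.parquet("/data/target/orders")

    // Reconciliation: row counts must match after the load
    assert(source.count() == target.count(), "Row count mismatch between source and target")

    // Integrity: the primary key must be non-null and unique in the target
    val keyCount     = target.count()
    val distinctKeys = target.select("order_id").na.drop().distinct().count()
    assert(keyCount == distinctKeys, "order_id is null or duplicated in target")

    // Spot check: no negative amounts should survive cleansing
    val badRows = target.filter("amount < 0").count()
    assert(badRows == 0, s"$badRows rows with negative amount")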

Posted 1 week ago

Apply

4.0 - 8.0 years

10 - 14 Lacs

Pune

Work from Office

Capco, a Wipro company, is a global technology and management consulting firm. Capco was awarded Consultancy of the Year in the British Bank Awards and ranked among the Top 100 Best Companies for Women in India 2022 by Avtar & Seramount. With a presence across 32 cities worldwide, we support 100+ clients across the banking, financial, and energy sectors, and we are recognized for our deep transformation execution and delivery.

WHY JOIN CAPCO
You will work on engaging projects with the largest international and local banks, insurance companies, payment service providers, and other key players in the industry — projects that will transform the financial services industry.

MAKE AN IMPACT
Innovative thinking, delivery excellence, and thought leadership to help our clients transform their business. Together with our clients and industry partners, we deliver disruptive work that is changing energy and financial services.

#BEYOURSELFATWORK
Capco has a tolerant, open culture that values diversity, inclusivity, and creativity.

CAREER ADVANCEMENT
With no forced hierarchy at Capco, everyone has the opportunity to grow as we grow, taking their career into their own hands.

DIVERSITY & INCLUSION
We believe that diversity of people and perspective gives us a competitive advantage.

Job Title: Big Data Engineer – Scala

Preferred skills:
- Strong skills in messaging technologies such as Apache Kafka or equivalent (a producer sketch follows this listing).
- Programming skills in Scala and Spark, with optimization techniques, plus Python.
- Able to write queries through Jupyter Notebook.
- Orchestration tools such as NiFi and Airflow.
- Design and implement intuitive, responsive UIs that allow issuers to better understand data and analytics.
- Experience with SQL and distributed systems.
- Strong understanding of cloud architecture.
- Ensure a high-quality code base by writing and reviewing performant, well-tested code.
- Demonstrated experience building complex products.
- Knowledge of Splunk or other alerting and monitoring solutions.
- Fluent in the use of Git and Jenkins.
- Broad understanding of software engineering concepts and methodologies.
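
For illustration, a minimal Kafka producer in Scala using the plain kafka-clients API — the broker address and topic are placeholders:

    import java.util.Properties
    import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

    object ProducerSketch {
      def main(args: Array[String]): Unit = {
        val props = new Properties()
        props.put("bootstrap.servers", "broker:9092") // hypothetical broker
        props.put("key.serializer",
          "org.apache.kafka.common.serialization.StringSerializer")
        props.put("value.serializer",
          "org.apache.kafka.common.serialization.StringSerializer")

        val producer = new KafkaProducer[String, String](props)
        try {
          // Fire-and-forget send; production code would check the returned Future
          producer.send(new ProducerRecord("events", "key-1", """{"id":1}"""))
          producer.flush()
        } finally {
          producer.close()
        }
      }
    }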

Posted 1 week ago

Apply

3.0 - 6.0 years

6 - 10 Lacs

Pune

Work from Office

Who are we
Amdocs helps those who build the future to make it amazing. With our market-leading portfolio of software products and services, we unlock our customers' innovative potential, empowering them to provide next-generation communication and media experiences for both individual end users and enterprise customers. Our employees around the globe are here to accelerate service providers' migration to the cloud, enable them to differentiate in the 5G era, and digitalize and automate their operations. Listed on the NASDAQ Global Select Market, Amdocs had revenue of $5.00 billion in fiscal 2024. For more information, visit www.amdocs.com.

In one sentence
Responsible for the design, development, modification, debugging, and/or maintenance of software systems. Works on specific modules, applications, or technologies, and deals with sophisticated assignments during the software development process.

What will your job look like
- Be accountable for and own specific modules within an application; provide technical support and guidance during solution design for new requirements and problem resolution for critical/complex issues, while ensuring code is maintainable, scalable, and supportable.
- Present demos of the software products to partners and internal/external customers, using technical knowledge to influence the direction and evolution of the product/solution.
- Investigate issues by reviewing/debugging code, provide fixes and workarounds, review changes for operability to maintain existing software solutions, highlight risks, and help mitigate risks from technical aspects.
- Bring continuous improvements/efficiencies to the software or business processes by utilizing software engineering tools and various innovative techniques, and by reusing existing solutions. By means of automation, reduce design complexity, reduce time to response, and simplify the client/end-user experience.
- Represent/lead discussions related to the product/application/modules/team (for example, lead technical design reviews) and establish relationships with internal customers/partners.

All you need is...
- 3 to 6 years of experience.
- Develop and maintain search functionality on the Lucidworks Fusion platform.
- Connect databases for pulling data into Fusion from various types of data sources.
- Implement real-time indexing of large-scale data sets residing in database files and other sources, using Fusion as the search platform.
- Proven experience implementing and maintaining enterprise search solutions in large-scale environments.
- Experience developing and deploying search solutions in a public cloud such as AWS.
- Proficient in high-level programming languages: Java, Scala, Python.
- Familiarity with containerization, scripting, cloud platforms, and CI/CD.
- Work with business analysts and customers to translate business needs into software solutions.
- Understanding of the software development process, version control, etc.

Preferred qualifications:
- Search technologies: querying and indexing content for Apache Solr, Elasticsearch, etc. (see the query sketch after this listing).
- Proficiency in search query languages (e.g., Lucene query syntax) and experience with data indexing and retrieval.
- Familiarity with vector search techniques and embedding models (e.g., BERT, Word2Vec).
- Familiarity with search relevance tuning and search relevance algorithms.
- Understanding of API authentication methodologies (OAuth2).
- Preferred languages: Python, Java, JavaScript.

Why you will love this job:
- The chance to serve as a specialist in software and technology.
- You will take an active role in technical mentoring within the team.
- We provide stellar benefits, from health to dental to paid time off and parental leave!
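
As a hedged sketch of programmatic search against a Solr-style index (Lucidworks Fusion wraps Solr), here is a SolrJ query from Scala 2.12+ — the endpoint, collection, and fields are invented:

    import org.apache.solr.client.solrj.SolrQuery
    import org.apache.solr.client.solrj.impl.HttpSolrClient

    object SearchSketch {
      def main(args: Array[String]): Unit = {
        // Hypothetical Solr endpoint and collection
        val client = new HttpSolrClient.Builder(
          "http://localhost:8983/solr/products").build()

        // Lucene query syntax: field:term clauses combined with AND
        val query = new SolrQuery("title:spark AND category:books")
        query.setRows(10)
        query.addSort("score", SolrQuery.ORDER.desc)

        val response = client.query(query)
        response.getResults.forEach(doc => println(doc.getFieldValue("title")))
        client.close()
      }
    }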

Posted 1 week ago

Apply

5.0 - 7.0 years

4 - 8 Lacs

Hyderabad

Work from Office

Educational Requirements: MCA, MSc, Bachelor of Engineering, BBA, BCom
Service Line: Data & Analytics Unit

Responsibilities
A day in the life of an Infoscion: As part of the Infosys consulting team, your primary role would be to get to the heart of customer issues, diagnose problem areas, design innovative solutions, and facilitate deployment resulting in client delight. You will develop proposals by owning parts of the proposal document and by giving inputs on solution design based on your areas of expertise. You will plan configuration activities, configure the product as per the design, conduct conference room pilots, and assist in resolving queries related to requirements and solution design. You will conduct solution/product demonstrations and POC/proof-of-technology workshops, and prepare effort estimates that suit the customer's budgetary requirements and are in line with the organization's financial guidelines. You will actively lead small projects and contribute to unit-level and organizational initiatives, with the objective of providing high-quality, value-adding solutions to customers. If you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you!

Additional Responsibilities:
- Ability to develop value-creating strategies and models that enable clients to innovate, drive growth, and increase their business profitability.
- Good knowledge of software configuration management systems.
- Awareness of the latest technologies and industry trends.
- Logical thinking and problem-solving skills, along with an ability to collaborate.
- Understanding of financial processes for various types of projects and the various pricing models available.
- Ability to assess current processes, identify improvement areas, and suggest technology solutions.
- Knowledge of one or two industry domains.
- Client-interfacing skills.
- Project and team management.

Technical and Professional Requirements: Python, PySpark, ETL, data pipelines, Big Data, AWS, GCP, Azure, data warehousing, Spark, Hadoop
Preferred Skills: Technology-Big Data-Big Data - ALL

Posted 1 week ago

Apply

8.0 - 13.0 years

11 - 15 Lacs

Bengaluru

Work from Office

Educational Requirements: MCA, MSc, Bachelor of Engineering, BBA, BSc
Service Line: Data & Analytics Unit

Responsibilities — Consulting Skills:
- Hypothesis-driven problem solving
- Go-to-market pricing and revenue growth execution
- Advisory, presentation, data storytelling
- Project leadership and execution

Additional Responsibilities — Typical Work Environment:
- Collaborative work with cross-functional teams across sales, marketing, and product development
- Stakeholder management and team handling
- Fast-paced environment with a focus on delivering timely insights to support business decisions
- Excellent problem-solving skills and the ability to address complex technical challenges
- Effective communication skills to collaborate with cross-functional teams and stakeholders
- Potential to work on multiple projects simultaneously, prioritizing tasks based on business impact

Qualification:
- Degree in Data Science, or Computer Science with a data science specialization
- Master's in Business Administration and Analytics preferred

Technical and Professional Requirements — Technical Skills:
- Proficiency in programming languages like Python and R for data manipulation and analysis
- Expertise in machine learning algorithms and statistical modeling techniques
- Familiarity with data warehousing and data pipelines
- Experience with data visualization tools like Tableau or Power BI
- Experience with cloud platforms (e.g., ADF, Databricks, Azure) and their AI services

Preferred Skills: Technology-Big Data-Text Analytics

Posted 1 week ago

Apply

3.0 - 8.0 years

6 - 7 Lacs

Pune

Work from Office

*APPLICABLE ONLY FOR IMMEDIATE JOINERS*

Job Title: Big Data Developer | Location: Pune | Work Type: Full-Time | Experience Required: 3+ Years

Position Overview: We are seeking a talented Software Development Engineer – Big Data with strong expertise in big data technologies. The ideal candidate will have hands-on experience with Apache Spark, Hadoop, AWS (Amazon Web Services), Kafka, NoSQL databases (e.g., MongoDB, Cassandra), Hive, Docker, and Kubernetes, plus proficiency in SQL and one or more coding languages such as Java, Scala, or Python. This role requires a deep understanding of software development principles, distributed systems, and container orchestration.

Responsibilities:
- Design, develop, and optimize scalable and efficient big data applications and pipelines using Apache Spark, Hadoop, and related frameworks.
- Implement data processing workflows and streaming solutions utilizing Apache Kafka for real-time data ingestion and processing.
- Develop and maintain ETL (Extract, Transform, Load) processes to support data integration and transformation tasks.
- Utilize NoSQL databases for data storage and retrieval, ensuring data consistency and performance.
- Containerize and orchestrate big data applications using Docker and Kubernetes for scalability and reliability.
- Collaborate closely with data scientists, analysts, and product teams to understand requirements and deliver robust data solutions.
- Optimize the performance and efficiency of data processing workflows and applications, focusing on scalability and cost-effectiveness.
- Mentor and guide junior team members, participate in code reviews, and contribute to improving team processes and practices.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 2+ years of professional software development experience with a focus on big data technologies.
- Strong programming skills in Java/Scala/Python and SQL for building scalable applications and data processing pipelines.
- Hands-on experience with Apache Spark, Hadoop, and related ecosystem tools is preferred.
- Solid understanding of NoSQL databases and distributed computing principles.
- Familiarity with real-time data streaming and messaging systems like Apache Kafka.
- Excellent problem-solving skills, an analytical mindset, and attention to detail.
- Strong communication skills and the ability to collaborate effectively in a team environment.
- Experience with Agile methodologies and DevOps practices is a plus.
- Hands-on experience with containerization and orchestration technologies such as Docker and Kubernetes is a plus.

Posted 1 week ago

Apply

3.0 - 6.0 years

14 - 18 Lacs

Kochi

Work from Office

As an Associate Software Developer at IBM, you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets.

In this role, your responsibilities may include:
- Implementing and validating predictive models, and creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques.
- Designing and implementing enterprise search applications such as Elasticsearch and Splunk for client requirements.
- Working in an Agile, collaborative environment, partnering with other scientists, engineers, consultants, and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviours.
- Building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modelling results.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Total experience: 6-7 years (relevant: 4-5 years).
- Mandatory skills: Azure Databricks, Python/PySpark, SQL, GitHub, Azure DevOps, Azure Blob.
- Ability to use programming languages like Java, Python, Scala, etc., to build pipelines that extract and transform data from a repository to a data consumer.
- Ability to use Extract, Transform, and Load (ETL) tools and/or data integration or federation tools to prepare and transform data as needed.
- Ability to use leading-edge tools such as Linux, SQL, Python, Spark, Hadoop, and Java.

Preferred technical and professional experience:
- You thrive on teamwork and have excellent verbal and written communication skills.
- Ability to communicate with internal and external clients to understand and define business needs, providing analytical solutions.
- Ability to communicate results to technical and non-technical audiences.

Posted 1 week ago

Apply

2.0 - 6.0 years

12 - 16 Lacs

Kochi

Work from Office

As an Associate Software Developer at IBM, you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets.

In this role, your responsibilities may include:
- Implementing and validating predictive models, and creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques.
- Designing and implementing enterprise search applications such as Elasticsearch and Splunk for client requirements.
- Working in an Agile, collaborative environment, partnering with other scientists, engineers, consultants, and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviours.
- Building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modelling results.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Experience in big data technologies such as Hadoop, Apache Spark, and Hive.
- Practical experience in Core Java (1.8 preferred), Python, or Scala.
- Experience with AWS cloud services, including S3, Redshift, EMR, etc.
- Strong expertise in RDBMS and SQL.
- Good experience in Linux and shell scripting.
- Experience building data pipelines with Apache Airflow.

Preferred technical and professional experience:
- You thrive on teamwork and have excellent verbal and written communication skills.
- Ability to communicate with internal and external clients to understand and define business needs, providing analytical solutions.
- Ability to communicate results to technical and non-technical audiences.

Posted 1 week ago

Apply

2.0 - 5.0 years

4 - 8 Lacs

Kochi

Work from Office

Role Overview
Join us as a Software Developer to design, develop, and maintain high-load, big-data JVM applications. Collaborate with a cross-functional team to ensure efficient and reliable product delivery.

Key Responsibilities:
- Design, develop, and maintain scalable, high-performance JVM applications.
- Translate business requirements into technical solutions.
- Write clean, efficient, and well-tested code.
- Conduct functional, unit, and integration testing to ensure high-quality delivery.
- Optimize application performance and scalability.
- Troubleshoot and resolve production issues.
- Stay updated with the latest Java and backend trends.

Required education: Bachelor's Degree

Required technical and professional expertise:
- 6+ years of hands-on development experience.
- Solid understanding of OOP principles and design patterns.
- Proficiency in cloud-native platforms (AWS, GCP, Azure).
- Practical experience developing for Kubernetes.
- Experience with big-data processing and analytics (Kafka, ClickHouse) is a plus.
- Strong problem-solving and analytical skills.
- Effective communication and teamwork abilities.

Preferred technical and professional experience:
- Experience in high-load data processing and distributed systems.
- Knowledge of microservices architecture.
- Familiarity with DevOps tools and practices (TDD, CI/CD, SCM).
- Hands-on experience with cloud observability and APM tools.
- Proficiency in RDBMS technologies (JDBC, SQL).
- Familiarity with NoSQL datastores (Elastic, Cassandra, S3).
- Expertise in Java web frameworks (Spring Boot, Quarkus, Dropwizard).
- Test-driven development (TDD) using JUnit or similar frameworks.
- Modern JVM languages: Kotlin, Scala, Clojure (see the Kafka Streams sketch after this listing).
- Modern Java backend frameworks and libraries: Reactor, Kafka Streams, jOOQ, cloud SDKs, serverless.
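
A minimal Kafka Streams topology in Scala using the official Scala DSL — topic names are placeholders, and the kafka-streams-scala dependency (recent Kafka versions) is assumed:

    import java.util.Properties
    import org.apache.kafka.streams.{KafkaStreams, StreamsConfig}
    import org.apache.kafka.streams.scala.StreamsBuilder
    import org.apache.kafka.streams.scala.ImplicitConversions._
    import org.apache.kafka.streams.scala.serialization.Serdes._

    object WordCountApp {
      def main(args: Array[String]): Unit = {
        val props = new Properties()
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "wordcount-sketch")
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092")

        val builder = new StreamsBuilder()
        builder.stream[String, String]("text-input")   // hypothetical input topic
          .flatMapValues(_.toLowerCase.split("\\W+").toSeq)
          .groupBy((_, word) => word)
          .count()                                     // stateful, fault-tolerant count
          .toStream
          .to("word-counts")                           // hypothetical output topic

        val streams = new KafkaStreams(builder.build(), props)
        streams.start()
        sys.addShutdownHook(streams.close())
      }
    }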

Posted 1 week ago

Apply

2.0 - 5.0 years

4 - 8 Lacs

Mumbai

Work from Office

Required skills:
- The ability to be a team player.
- The ability and skill to train other people in procedural and technical topics.
- Strong communication and collaboration skills.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise: able to write complex SQL queries; experience with Azure Databricks.
Preferred technical and professional experience: excellent communication and stakeholder management skills.

Posted 1 week ago

Apply

5.0 - 10.0 years

22 - 27 Lacs

Kochi

Work from Office

Responsibilities:
- Create solution outlines and macro designs describing end-to-end product implementation in data platforms, including system integration, data ingestion, data processing, the serving layer, design patterns, and platform architecture principles.
- Contribute to pre-sales and sales support through RFP responses, solution architecture, planning, and estimation.
- Contribute to reusable components/assets/accelerators to support capability development.
- Participate in customer presentations as a platform architect / subject matter expert on Big Data, Azure Cloud, and related technologies.
- Participate in customer PoCs to deliver the outcomes.
- Participate in delivery reviews/product reviews and quality assurance, and act as a design authority.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Experience designing data products that provide descriptive, prescriptive, and predictive analytics to end users or other systems.
- Experience in data engineering and architecting data platforms.
- Experience architecting and implementing data platforms on the Azure cloud platform; Azure experience is mandatory (ADLS Gen1/Gen2, Data Factory, Databricks, Synapse Analytics, Azure SQL, Cosmos DB, Event Hub, Snowflake, Azure Purview, Microsoft Fabric, Kubernetes, Terraform, Airflow).
- Experience in the Big Data stack (Hadoop ecosystem: Hive, HBase, Kafka, Spark, Scala, PySpark, Python, etc.) with Cloudera or Hortonworks.

Preferred technical and professional experience:
- Experience architecting complex data platforms on the Azure cloud platform and on-prem.
- Experience with and exposure to Data Fabric and Data Mesh concepts and solutions such as Microsoft Fabric, Starburst, Denodo, IBM Data Virtualisation, Talend, or Tibco Data Fabric.
- Exposure to data cataloging and governance solutions such as Collibra, Alation, Watson Knowledge Catalog, Databricks Unity Catalog, Apache Atlas, Snowflake Data Glossary, etc.

Posted 1 week ago

Apply

2.0 - 6.0 years

12 - 16 Lacs

Kochi

Work from Office

As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities such as creating pipelines/workflows from source to target and implementing solutions that tackle clients' needs.

Your primary responsibilities include:
- Design, build, optimize, and support new and existing data models and ETL processes based on our clients' business requirements.
- Build, deploy, and manage data infrastructure that can adequately handle the needs of a rapidly growing, data-driven organization.
- Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need to.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Developed PySpark code for AWS Glue jobs and EMR.
- Worked on scalable distributed data systems using the Hadoop ecosystem on AWS EMR and the MapR distribution.
- Developed Python and PySpark programs for data analysis.
- Good working experience with Python to develop a custom framework for rule generation (like a rules engine).
- Developed Hadoop streaming jobs using Python for integrating Python-API-supported applications.
- Developed Python code to gather data from HBase and designed the solution to implement it using PySpark.
- Used Apache Spark DataFrames/RDDs to apply business transformations, and utilized HiveContext objects to perform read/write operations.
- Rewrote some Hive queries in Spark SQL to reduce overall batch time (see the sketch below).

Preferred technical and professional experience:
- Understanding of DevOps.
- Experience building scalable end-to-end data ingestion and processing solutions.
- Experience with object-oriented and/or functional programming languages such as Python, Java, and Scala.
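
To illustrate the Hive-to-Spark-SQL rewrite mentioned above, a hypothetical example in Scala: the same aggregation formerly run as a Hive QL batch is executed through spark.sql, so it runs on Spark's engine instead (database, table, and column names are invented):

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder()
      .appName("hive-to-sparksql")
      .enableHiveSupport() // lets Spark resolve existing Hive metastore tables
      .getOrCreate()

    // Previously a Hive QL batch job; the identical statement runs under Spark SQL,
    // typically reducing batch time via Spark's in-memory execution
    val daily = spark.sql("""
      SELECT event_date, event_type, COUNT(*) AS events
      FROM   raw_db.events
      WHERE  event_date >= '2024-01-01'
      GROUP  BY event_date, event_type
    """)

    daily.write.mode("overwrite").saveAsTable("curated_db.daily_events")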

Posted 1 week ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.
