5.0 - 9.0 years
0 Lacs
Pune, Maharashtra
On-site
As a Solution Designer (Cloud Data Integration) at Barclays within the Customer Digital and Data Business Area, you will play a vital role in supporting the successful delivery of location strategy projects. Your responsibilities will include ensuring projects are delivered according to plan, budget, quality standards, and governance protocols. By spearheading the evolution of the digital landscape, you will drive innovation and excellence, using cutting-edge technology to enhance our digital offerings and deliver unparalleled customer experiences.

To excel in this role, you should have hands-on experience working with large-scale data platforms and developing cloud solutions on the AWS data platform, with a track record of driving business success through your expertise in AWS, distributed computing paradigms, and the design of data ingestion programs using technologies such as Glue, Lambda, S3, Redshift, Snowflake, Apache Kafka, and Spark Streaming. Proficiency in Python, PySpark, SQL, and database management systems is essential, along with a strong understanding of data governance principles and tools.

Additionally, valued skills for this role may include experience in multi-cloud solution design, data modeling, data governance frameworks, agile methodologies, project management tools, business analysis, and product ownership within a data analytics context. A basic understanding of the banking domain, along with excellent analytical, communication, and interpersonal skills, will be crucial for success in this position.

Your main purpose as a Solution Designer will be to design, develop, and implement solutions to complex business problems by collaborating with stakeholders to understand their needs and requirements. You will be accountable for designing solutions that balance technology risks against business delivery, driving consistency and aligning with modern software engineering practices and automated delivery tooling. You will also be expected to provide the impact assessments, fault-finding support, and architecture inputs required to comply with the bank's governance processes.

As an Assistant Vice President, you will be responsible for advising on decision-making processes, contributing to policy development, and ensuring operational effectiveness. If the position involves leadership responsibilities, you will lead a team to deliver impactful work and set objectives for employees while demonstrating leadership behaviours focused on listening, inspiring, aligning, and developing others. Alternatively, as an individual contributor, you will lead collaborative assignments, guide team members, identify new directions for projects, consult on complex issues, and collaborate with other areas to support business activities.

All colleagues at Barclays are expected to embody the Barclays Values of Respect, Integrity, Service, Excellence, and Stewardship, as well as the Barclays Mindset to Empower, Challenge, and Drive. By demonstrating these values and mindset, you will contribute to creating an environment where colleagues can thrive and deliver consistently excellent results.
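As an illustration of the kind of S3-based ingestion program this posting describes, here is a minimal PySpark sketch; the bucket names, paths, and column names are hypothetical, not taken from the role:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("s3-ingestion").getOrCreate()

# Read raw JSON events landed in S3 (path is illustrative).
events = spark.read.json("s3a://example-raw-bucket/events/2024/")

# Light cleansing: drop records without a key, derive the event date.
cleaned = (
    events
    .filter(F.col("event_id").isNotNull())
    .withColumn("event_date", F.to_date("event_ts"))
)

# Write as date-partitioned Parquet for downstream Redshift/Snowflake loads.
(cleaned.write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3a://example-curated-bucket/events/"))
```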
Posted 14 hours ago
5.0 - 9.0 years
0 Lacs
Haryana
On-site
The Lead, Software Engineer role at Mastercard plays a crucial part in the data unification process across different data assets, creating a unified view of data from multiple sources. The position focuses on driving insights from available data sets and supporting the development of new data-driven cyber products, services, and actionable insights. The Lead, Software Engineer will collaborate with teams such as Product Management, Data Science, Platform Strategy, and Technology to understand data needs and requirements for delivering data solutions that bring business value.

Key responsibilities include performing data ingestion, aggregation, and processing to derive relevant insights; manipulating and analyzing complex data from various sources; and identifying innovative ideas, delivering proofs of concept and prototypes, and proposing new products and enhancements. The role also covers integrating and unifying new data assets to enhance customer value, analyzing transaction and product data to generate actionable recommendations for business growth, and collecting feedback on new solutions from clients and the development, product, and sales teams.

The ideal candidate has a good understanding of streaming technologies like Kafka and Spark Streaming; proficiency in programming languages such as Java, Scala, or Python; experience with an enterprise business intelligence or data platform; strong SQL and higher-level programming skills; knowledge of data mining and machine learning algorithms; and familiarity with data integration (ETL/ELT) tools, including Apache NiFi, Azure Data Factory, Pentaho, and Talend. The candidate should also be able to work in a fast-paced, deadline-driven environment, collaborate effectively with cross-functional teams, and articulate solution requirements for different groups within the organization.

All employees working at or on behalf of Mastercard must adhere to the organization's security policies and practices, ensure the confidentiality and integrity of accessed information, report any suspected information security violations or breaches, and complete all mandatory security trainings in accordance with Mastercard's guidelines. The Lead, Software Engineer role at Mastercard offers an exciting opportunity to contribute to the development of innovative data-driven solutions that drive business growth and enhance the customer value proposition.
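As a concrete illustration of the Kafka plus Spark Streaming combination this posting asks for, here is a minimal PySpark Structured Streaming sketch; the broker address and topic name are placeholders, and the job assumes the spark-sql-kafka connector package is available on the classpath:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("kafka-stream").getOrCreate()

# Subscribe to a Kafka topic (broker and topic are placeholders).
raw = (spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "transactions")
    .load())

# Kafka delivers key/value as binary; cast the key and count events per key.
counts = (raw
    .selectExpr("CAST(key AS STRING) AS k")
    .groupBy("k")
    .count())

# Stream the running counts to the console for inspection.
(counts.writeStream
    .outputMode("complete")
    .format("console")
    .start()
    .awaitTermination())
```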
Posted 2 days ago
3.0 - 7.0 years
0 Lacs
West Bengal
On-site
At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture, and technology to become the best version of you. We are counting on your unique voice and perspective to help EY become even better. Join us and build an exceptional experience for yourself, and a better working world for all.

We are seeking a highly skilled and motivated Data Analyst with experience in ETL services to join our dynamic team. As a Data Analyst, you will be responsible for data requirement gathering, preparing data requirement artefacts, data integration strategies, data quality, data cleansing, optimizing data pipelines, and solutions that support business intelligence, analytics, and large-scale data processing. You will collaborate closely with data engineering teams to ensure seamless data flow across our systems. The role requires hands-on experience in the Financial Services domain with solid data management, Python, SQL, and advanced SQL development skills. You should be able to interact with data stakeholders and source teams to gather data requirements; understand, analyze, and interpret large datasets; prepare data dictionaries, source-to-target mappings, and reporting requirements; and develop advanced programs for data extraction and analysis.

Key Responsibilities:
- Interact with data stakeholders and source teams to gather data requirements
- Understand, analyze, and interpret large datasets
- Prepare data dictionaries, source-to-target mapping, and reporting requirements
- Develop advanced programs for data extraction and preparation
- Discover, design, and develop analytical methods to support data processing
- Perform data profiling manually or using profiling tools
- Identify critical data elements and PII handling processes/mandates
- Collaborate with the technology team to develop analytical models and validate results
- Interface and communicate with onsite teams directly to understand requirements
- Provide technical solutions as per business needs and best practices

Required Skills and Qualifications:
- BE/BTech/MTech/MCA with 3-7 years of industry experience in data analysis and management
- Experience in finance data domains
- Strong Python programming and data analysis skills
- Strong advanced SQL/PL-SQL programming experience
- In-depth experience in data management, data integration, ETL, data modeling, data mapping, data profiling, data quality, reporting, and testing

Good to have:
- Experience using Agile methodologies
- Experience using cloud technologies such as AWS or Azure
- Experience in Kafka, Apache Spark using Spark SQL and Spark Streaming, or Apache Storm

Other key capabilities:
- Client-facing skills and a proven ability in effective planning, execution, and problem-solving
- Excellent communication, interpersonal, and teamwork skills
- A multi-tasking attitude, with the flexibility to change priorities quickly
- A methodical approach, logical thinking, and the ability to plan work and meet deadlines
- Accuracy and attention to detail
- Written and verbal communication skills
- Willingness to travel to meet client needs
- Ability to plan resource requirements from high-level specifications
- Ability to quickly understand and learn new technologies/features and inspire change within the team and client organization

EY exists to build a better working world, helping to create long-term value for clients, people, and society, and to build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform, and operate across assurance, consulting, law, strategy, tax, and transactions. EY teams ask better questions to find new answers for the complex issues facing our world today.
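The data-profiling responsibility above lends itself to a short sketch. Assuming PySpark (one of several tools that would fit the posting's Spark/Python stack), a quick per-column profile of null and approximate distinct counts might look like this; the input path is hypothetical:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("profiling").getOrCreate()
df = spark.read.parquet("path/to/dataset")  # placeholder path

# Null count and approximate distinct count per column: a quick profile
# that helps flag critical data elements before deeper quality checks.
profile = df.select(
    *[F.sum(F.col(c).isNull().cast("int")).alias(f"{c}__nulls") for c in df.columns],
    *[F.approx_count_distinct(c).alias(f"{c}__distinct") for c in df.columns],
)
profile.show(truncate=False)
```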
Posted 2 days ago
5.0 - 9.0 years
0 Lacs
Karnataka
On-site
BizViz is a company that offers a comprehensive view of a business's data, catering to various industries and meeting the diverse needs of business executives. With a dedicated team of over 50 professionals who have worked on the BizViz platform for several years, the company aims to develop technological solutions that give its clients a competitive advantage. At BizViz, we are committed to the success of our customers, striving to create applications that align with their unique visions and requirements. We steer clear of generic ERP templates, offering businesses a more tailored solution.

As a Big Data Engineer at BizViz, you will join a small, agile team of data engineers focused on building an innovative big data platform for enterprises dealing with critical data management and diverse application stakeholders at scale. The platform handles data ingestion, warehousing, and governance, allowing developers to create complex queries efficiently. With features like automatic scaling, elasticity, security, logging, and data provenance, our platform empowers developers to concentrate on algorithms rather than administrative tasks. We are seeking engineers who are eager for technical challenges: to enhance our current platform for existing clients and to develop new capabilities for future customers.

Key Responsibilities:
- Work as a Senior Big Data Engineer within the Data Science Innovation team, collaborating closely with internal and external stakeholders throughout the development process.
- Understand the needs of key stakeholders to enhance or create new solutions related to data and analytics.
- Collaborate in a cross-functional, matrix organization, even in ambiguous situations.
- Contribute to scalable solutions using large datasets alongside other data scientists.
- Research innovative data solutions to address real market challenges.
- Analyze data to provide fact-based recommendations for innovation projects.
- Explore Big Data and other unstructured data sources to uncover new insights.
- Partner with cross-functional teams to develop and execute business strategies.
- Stay updated on advancements in data analytics, Big Data, predictive analytics, and technology.

Qualifications:
- BTech/MCA degree or higher.
- Minimum 5 years of experience.
- Proficiency in Java, Scala, and Python.
- Familiarity with Apache Spark, Hadoop, Hive, Spark SQL, Spark Streaming, and Apache Kafka.
- Knowledge of predictive algorithms, MLlib, Cassandra, RDBMS (MySQL, MS SQL, etc.), NoSQL, columnar databases, and Bigtable.
- Deep understanding of search engine technology, including Elasticsearch/Solr.
- Experience with Agile development practices such as Scrum.
- Strong problem-solving skills for designing algorithms related to data cleaning, mining, clustering, and pattern recognition.
- Ability to work effectively in a matrix-driven organization under varying circumstances.
- Desirable personal qualities: creativity, tenacity, curiosity, and a passion for technical excellence.

Location: Bangalore

To apply for this position, interested candidates can send their applications to careers@bdb.ai.
Posted 2 days ago
10.0 - 14.0 years
20 - 30 Lacs
Noida, Pune, Bengaluru
Hybrid
Greetings from Infogain! We have an immediate requirement for a Big Data Engineer (Lead) at Infogain India Pvt Ltd. As a Big Data Engineer (Lead), you will be responsible for leading a team of big data engineers. You will work closely with clients and team members to understand their requirements and develop architectures that meet their needs. You will also be responsible for providing technical leadership and guidance to your team.

Mode of hiring: Permanent
Skills: (Azure OR AWS) AND (Apache Spark OR Hive OR Hadoop) AND (Spark Streaming OR Apache Flink OR Kafka) AND NoSQL AND (Shell OR Python)
Experience: 10 to 14 years
Location: Bangalore/Noida/Gurgaon/Pune/Mumbai/Kochi
Notice period: early joiners preferred
Educational qualification: BE/BTech/MCA/MTech

Working Experience:
12-15 years of broad experience working with enterprise IT applications in cloud platform and big data environments.

Competencies & Personal Traits:
- Works as a team player
- Excellent problem analysis skills
- Experience with at least one cloud infrastructure provider (Azure/AWS)
- Experience in building data pipelines using batch processing with Apache Spark (Spark SQL, DataFrame API) or Hive query language (HQL)
- Experience in building streaming data pipelines using Apache Spark Structured Streaming or Apache Flink on Kafka and Delta Lake (see the sketch below)
- Knowledge of NoSQL databases; experience with Cosmos DB, RESTful APIs, and GraphQL is good to have
- Knowledge of big data ETL processing tools, data modelling, and data mapping
- Experience with Hive and Hadoop file formats (Avro/Parquet/ORC)
- Basic knowledge of scripting (shell/bash)
- Experience working with multiple data sources, including relational databases (SQL Server/Oracle/DB2/Netezza), NoSQL/document databases, and flat files
- Basic understanding of CI/CD tools such as Jenkins, JIRA, Bitbucket, Artifactory, Bamboo, and Azure DevOps
- Basic understanding of DevOps practices using Git version control
- Ability to debug, fine-tune, and optimize large-scale data processing jobs

You can share your CV at arti.sharma@infogain.com with the following details: total experience, relevant experience in big data, relevant experience in AWS or Azure cloud, current CTC, expected CTC, current location, and whether the Bangalore location works for you.
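The Kafka-to-Delta Lake streaming pipeline named in the competencies can be sketched in a few lines of PySpark; the broker, topic, and paths are placeholders, and the snippet assumes a Databricks or delta-spark-enabled session:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("kafka-to-delta").getOrCreate()

# Read the Kafka topic and cast the binary payload to strings.
stream = (spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "orders")
    .load()
    .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)", "timestamp"))

# Append the stream into a Delta table; the checkpoint directory gives
# exactly-once semantics across restarts.
(stream.writeStream
    .format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/orders")
    .outputMode("append")
    .start("/mnt/delta/orders")
    .awaitTermination())
```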
Posted 3 days ago
10.0 - 14.0 years
0 Lacs
Pune, Maharashtra
On-site
About the Team
As a part of the DoorDash organization, you will be joining a data-driven team that values timely, accurate, and reliable data for making informed business and product decisions. Data serves as the foundation of DoorDash's success, and the Data Engineering team is responsible for building database solutions tailored to various use cases such as reporting, product analytics, marketing optimization, and financial reporting. By implementing robust data structures and data warehouse architecture, this team plays a crucial role in facilitating decision-making processes at DoorDash. Additionally, the team focuses on enhancing the developer experience by developing tools that support the organization's high-velocity demands.

About the Role
DoorDash is seeking a dedicated Data Engineering Manager to lead the development of enterprise-scale data solutions. In this role, you will serve as a technical expert on all aspects of data architecture, empowering data engineers, data scientists, and DoorDash partners. Your responsibilities will include fostering a culture of engineering excellence and enabling engineers to deliver reliable and flexible solutions at scale. You will also be instrumental in building and nurturing a high-performing team, driving innovation and success in a dynamic and fast-paced environment.

In this role, you will:
- Lead and manage a team of data engineers, focusing on hiring, building, growing, and nurturing impactful business-focused data teams.
- Drive the technical and strategic vision for embedded pods and foundational enablers to meet current and future scalability and interoperability needs.
- Strive for continuous improvement of data architecture and development processes.
- Balance quick wins with long-term strategy and engineering excellence, breaking down large systems into user-friendly data assets and reusable components.
- Collaborate cross-functionally with stakeholders, external partners, and peer data leaders.
- Utilize effective planning and execution tools to ensure short-term and long-term team and stakeholder success.
- Prioritize reliability and quality as essential components of data solutions.

Qualifications:
- Bachelor's, Master's, or Ph.D. in Computer Science or an equivalent field.
- Over 10 years of experience in data engineering, data platform, or related domains.
- Minimum of 2 years of hands-on management experience.
- Strong communication and leadership skills, with a track record of hiring and growing teams in a fast-paced environment.
- Proficiency in programming languages such as Python, Kotlin, and SQL.
- Prior experience with technologies like Snowflake, Databricks, Spark, Trino, and Pinot.
- Familiarity with the AWS ecosystem and large-scale batch/real-time ETL orchestration using tools like Airflow, Kafka, and Spark Streaming.
- Knowledge of data lake file formats and catalogs, including Delta Lake, Apache Iceberg, Glue Catalog, and S3.
- Proficiency in system design and experience with AI solutions in the data space.

At DoorDash, we are dedicated to fostering a diverse and inclusive community within our company and beyond. We believe that innovation thrives in an environment where individuals from diverse backgrounds, experiences, and perspectives come together. We are committed to providing equal opportunities for all and creating an inclusive workplace where everyone can excel and contribute to our collective success.
Posted 3 days ago
2.0 - 6.0 years
0 Lacs
Pune, Maharashtra
On-site
The Applications Development Intermediate Programmer Analyst position is an intermediate-level role in which you will participate in the establishment and implementation of new or revised application systems and programs in coordination with the Technology team. Your main objective will be to contribute to applications systems analysis and programming activities.

You will be responsible for developing and maintaining Java-based applications using the Spring framework. Additionally, you will design and implement batch processing solutions using Spark Batch for large-scale data processing and build real-time data pipelines using Spark Streaming. Collaboration with cross-functional teams to define, design, and deliver new features will be a key aspect of your role. You will also optimize data processing workflows for performance, scalability, and reliability. Troubleshooting and resolving issues related to data processing, application performance, and system integration will be part of your responsibilities. Writing clean, maintainable, and well-documented code following best practices is essential. You will participate in code reviews, unit testing, and system testing to ensure quality deliverables, and you will be encouraged to stay updated with emerging technologies and propose improvements to existing systems.

Required Skills and Qualifications:
- Education: Bachelor's degree in Computer Science, Engineering, or a related field.
- Experience: 2 to 5 years of professional experience in Java development.

Technical Skills:
- Strong proficiency in Java (version 8 or higher) and object-oriented programming.
- Hands-on experience with Spring (Spring Boot, Spring MVC, or Spring Data) for building enterprise applications.
- Expertise in Spark Batch for large-scale data processing and analytics.
- Experience with Spark Streaming for real-time data processing and streaming pipelines.
- Familiarity with distributed computing concepts and big data frameworks.
- Proficiency with version control systems like Git.
- Knowledge of build tools such as Maven or Gradle.
- Understanding of Agile/Scrum methodologies.

Soft Skills:
- Strong problem-solving and analytical skills.
- Excellent communication and teamwork abilities.
- Ability to manage multiple priorities and work independently.

Preferred Skills:
- Experience with big data technologies like Hadoop, Kafka, or Hive.
- Knowledge of containerization tools like Docker or Kubernetes.
- Experience with CI/CD pipelines and tools like Jenkins.
- Understanding of data storage solutions like HDFS.

This job description provides a high-level overview of the responsibilities and required skills for the Applications Development Intermediate Programmer Analyst position. Other job-related duties may be assigned as required.
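To make the Spark Streaming responsibility concrete, here is a hedged sketch of an event-time windowed aggregation. The posting is Java-centric, so read this PySpark version as an illustration of the concept rather than the team's actual stack; the topic and broker are placeholders:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("windowed-agg").getOrCreate()

# Treat each Kafka message value as a user id (illustrative schema).
events = (spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "clicks")
    .load()
    .selectExpr("CAST(value AS STRING) AS user_id", "timestamp"))

# Tolerate events up to 10 minutes late, then count clicks per user
# in 5-minute tumbling windows keyed on event time.
windowed = (events
    .withWatermark("timestamp", "10 minutes")
    .groupBy(F.window("timestamp", "5 minutes"), "user_id")
    .count())

(windowed.writeStream
    .outputMode("update")
    .format("console")
    .start()
    .awaitTermination())
```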
Posted 3 days ago
5.0 - 9.0 years
0 Lacs
Hyderabad, Telangana
On-site
As an AWS Big Data Engineer working in a remote location, you will be a crucial part of a company that provides enterprise-grade platforms designed to accelerate the adoption of Kubernetes and data. Our flagship platform, Gravity, offers developers a simplified Kubernetes experience by handling all the underlying complexities. You will have the opportunity to use tailor-made workflows to deploy Microservices, Workers, Data, and MLOps workloads across multiple cloud providers. Gravity takes care of various Kubernetes-related orchestration tasks, including cluster provisioning, workload deployments, configuration management, secret management, scaling, and provisioning of cloud services. Gravity also provides out-of-the-box observability for workloads, enabling developers to quickly engage in Day 2 operations.

You will also work with Dark Matter, a unified data platform that enables enterprises to extract value from their data lakes. Within this platform, data engineers and data analysts can easily discover datasets through an Augmented Data Catalog. The Data Profile, Data Quality, and Data Privacy functionalities are deeply integrated within the catalog, offering an immediate snapshot of datasets in data lakes. Organizations can maintain data quality by defining quality rules that automatically monitor the accuracy, validity, and consistency of data to meet their data governance standards. The built-in Data Privacy engine can identify sensitive data in data lakes and take automated actions, such as redactions, through an integrated Policy and Governance engine.

For this role, you should have a minimum of 5 years of experience working with high-volume data infrastructure, along with proficiency in AWS and/or Databricks, Kubernetes, ETL, and job orchestration tooling. You should have extensive programming experience in either Python or Java, plus skills in data modeling, optimizing SQL queries, and system performance tuning. Knowledge of and proficiency in the latest open-source data frameworks, modern data platform tech stacks, and tools are essential, and you should be proficient in SQL, AWS, databases, Apache Spark, Spark Streaming, EMR, Kubernetes, and Kinesis/Kafka. Your passion should lie in tackling messy unstructured data and transforming it into clean, usable data that contributes to a more organized world, and continuous learning in a rapidly evolving data landscape should be a priority for you. Strong communication skills, the ability to work independently, and a degree in Computer Science, Software Engineering, Mathematics, or equivalent experience are necessary for this role. The benefit of working from home is provided as part of this position.
Posted 3 days ago
3.0 - 8.0 years
5 - 9 Lacs
Gurugram
Work from Office
Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data, and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits, and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities:
- Analyze business requirements and functional specifications
- Be able to determine the impact of changes on current system functionality
- Interact with diverse business partners and technical workgroups
- Be flexible to collaborate with onshore business during US business hours
- Be flexible to support project releases during US business hours
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies regarding flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary, or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so

Required Qualifications:
- Undergraduate degree or equivalent experience
- 3+ years of working experience in Python, PySpark, and Scala
- 3+ years of experience working on MS SQL Server and NoSQL DBs like Cassandra, etc.
- Hands-on working experience in Azure Databricks
- Solid healthcare domain knowledge
- Exposure to DevOps methodology and creating CI/CD deployment pipelines
- Exposure to Agile methodology, specifically using tools like Rally
- Ability to understand the existing application codebase, perform impact analysis, and update the code when required based on business logic or for optimization
- Proven excellent analytical and communication skills (both verbal and written)

Preferred Qualification:
- Experience with streaming applications (Kafka, Spark Streaming, etc.)

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location, and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health, which are disproportionately experienced by people of color, historically marginalized groups, and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes, an enterprise priority reflected in our mission. #Gen #NJP
Posted 4 days ago
3.0 - 6.0 years
11 - 20 Lacs
Bengaluru
Work from Office
Role & Responsibilities
We are seeking a skilled Data Engineer to maintain robust data infrastructure and pipelines that support our operational analytics and business intelligence needs. The candidate will bridge the gap between data engineering and operations, ensuring reliable, scalable, and efficient data systems that enable data-driven decision making across the organization.
- Strong proficiency in Spark SQL and hands-on experience with real-time Kafka and Flink
- Databases: strong knowledge of relational databases (Oracle, MySQL) and NoSQL systems
- Proficiency with version control (Git), CI/CD practices, and collaborative development workflows
- Strong operations management and stakeholder communication skills
- Flexibility to work across time zones
- A cross-cultural communication mindset
- Experience working in cross-functional teams
- A continuous-learning mindset and adaptability to new technologies

Preferred Candidate Profile
- Bachelor's degree in Computer Science, Engineering, Mathematics, or a related field
- 3+ years of experience in data engineering, software engineering, or a related role
- Proven experience building and maintaining production data pipelines
- Expertise in the Hadoop ecosystem: Spark SQL, Iceberg, Hive, etc.
- Extensive experience with Apache Kafka, Apache Flink, and other relevant streaming technologies
- Orchestration tools: Apache Airflow and UC4; proficiency in Python, Unix, or similar languages
- Good understanding of SQL, Oracle, SQL Server, NoSQL, or similar
- Proficiency with version control (Git), CI/CD practices, and collaborative development workflows
- Immediate joiners or candidates with a notice period under 30 days are preferred
Posted 4 days ago
3.0 - 8.0 years
4 - 8 Lacs
Pune
Work from Office
Required Skills and Competencies:
- Experience: 3+ years.
- Expertise in the Python language is a MUST.
- SQL (should be able to write complex SQL queries) is a MUST.
- Hands-on experience in Apache Flink Streaming or Spark Streaming is a MUST.
- Hands-on Apache Kafka experience is a MUST.
- Data lake development experience.
- Orchestration (Apache Airflow is preferred; see the sketch after this list).
- Spark and Hive: optimization of Spark/PySpark and Hive apps.
- Trino/(AWS Athena) (good to have).
- Snowflake (good to have).
- Data quality (good to have).
- File storage (S3 is good to have).
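Since the list prefers Apache Airflow for orchestration, a minimal DAG sketch follows; the dag_id, schedule, and job path are hypothetical:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_lake_ingest",   # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule_interval="0 2 * * *",  # 02:00 daily
    catchup=False,
) as dag:
    # spark-submit a PySpark job; a SparkSubmitOperator from the Spark
    # provider package would work equally well here.
    ingest = BashOperator(
        task_id="ingest_events",
        bash_command="spark-submit /opt/jobs/ingest_events.py",
    )
```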
Posted 4 days ago
4.0 - 9.0 years
4 - 8 Lacs
Pune
Work from Office
Experience: 4+ years.
- Expertise in the Python language is a MUST.
- SQL (should be able to write complex SQL queries) is a MUST.
- Hands-on experience in Apache Flink Streaming or Spark Streaming is a MUST.
- Hands-on Apache Kafka experience is a MUST.
- Data lake development experience.
- Orchestration (Apache Airflow is preferred).
- Spark and Hive: optimization of Spark/PySpark and Hive apps.
- Trino/(AWS Athena) (good to have).
- Snowflake (good to have).
- Data quality (good to have).
- File storage (S3 is good to have).

Our Offering:
- Global cutting-edge IT projects that shape the future of digital and have a positive impact on the environment.
- Wellbeing programs and work-life balance: integration and passion-sharing events.
- Attractive salary and company initiative benefits.
- Courses and conferences.
- Hybrid work culture.
Posted 4 days ago
8.0 - 12.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
For this position, you should have a minimum of 8 to 10 years of experience in Java, REST APIs, and Spring Boot, along with hands-on experience with AngularJS, ReactJS, or VueJS. A bachelor's degree or higher in computer science, data science, or a related field is required. Your role will involve data cleaning, visualization, and reporting, so practical experience in these areas is necessary, and previous exposure to an agile environment is essential for success in this position. Your excellent analytical and problem-solving skills will be key assets in meeting the job requirements. In addition to the mandatory qualifications, familiarity with the Hadoop ecosystem and experience with AWS (EMR) would be advantageous, and you should ideally have a minimum of 2 years of experience with real-time data stream platforms like Kafka and Spark Streaming. Excellent communication and interpersonal skills will be necessary for effective collaboration within the team and with stakeholders.
Posted 5 days ago
8.0 - 12.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
As a Java Developer, you will apply your 8 to 10 years of experience in Java, REST APIs, and Spring Boot to develop efficient and scalable solutions. Your expertise in Angular JS, React JS, or Vue JS will be essential for creating dynamic and interactive user interfaces. A Bachelor's degree or higher in computer science, data science, or a related field is required to ensure a strong foundation in software development. Your role will involve hands-on work with data cleaning, visualization, and reporting, enabling you to contribute to data-driven decision-making processes. Working in an agile environment, you will apply your excellent analytical and problem-solving skills to address complex technical challenges effectively. Your communication and interpersonal skills will be crucial for collaborating with team members and stakeholders. Additionally, familiarity with the Hadoop ecosystem and experience with AWS (EMR) would be advantageous, and at least 2 years of relevant experience with real-time data stream platforms like Kafka and Spark Streaming will further enhance your ability to build real-time data processing solutions. If you are a proactive and innovative Java Developer looking to work on cutting-edge technologies and contribute to impactful projects, this role offers an exciting opportunity for professional growth and development.
Posted 6 days ago
4.0 - 9.0 years
10 - 20 Lacs
Pune, Chennai, Bengaluru
Hybrid
Experience: 4-10 years
Location: Pune, Bangalore, Chennai, Noida, and Gurgaon
Notice period: immediate to 30 days only
Mandatory skills: Apache Spark, Java programming
- Strong knowledge of the Apache Spark framework: Core Spark, Spark DataFrames, Spark Streaming
- Hands-on experience in at least one programming language (Java)
- Good understanding of distributed programming concepts
- Experience in optimizing Spark DAGs and Hive queries on Tez
- Experience using tools like Git, Autosys, Bitbucket, and Jira
- Ability to apply DWH principles within a Hadoop environment and NoSQL databases.
Posted 6 days ago
6.0 - 11.0 years
15 - 25 Lacs
Bengaluru
Work from Office
Hiring a Data Engineer in Bangalore with 6+ years of experience in the skills below.
Must have:
- Big Data technologies: Hadoop, MapReduce, Spark, Kafka, Flink
- Programming languages: Java/Scala/Python
- Cloud: Azure, AWS, Google Cloud
- Docker/Kubernetes

Required candidate profile:
- Strong communication skills
- Experience with relational SQL/NoSQL databases: Postgres and Cassandra
- Experience with the ELK stack
- Immediate joiners are a plus
- Must be ready to work from the office
Posted 1 week ago
10.0 - 14.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
The Applications Development Technology Lead Analyst role is a senior position in which you will implement new or updated application systems and programs in collaboration with the Technology team. Your main objective will be to lead applications systems analysis and programming activities.

Your responsibilities will include partnering with various management teams to ensure the integration of functions to achieve goals, identifying necessary system enhancements for new products and process improvements, resolving high-impact problems/projects by evaluating complex business processes, providing expertise in applications programming, ensuring application design aligns with the architecture blueprint, developing standards for coding, testing, debugging, and implementation, gaining comprehensive knowledge of how business areas integrate, analyzing issues to develop innovative solutions, advising mid-level developers and analysts, assessing risks in business decisions, and being a team player who can adapt to changing priorities.

The required skills for this role include strong knowledge of Spark using Java/Scala and the Hadoop ecosystem, with hands-on experience in Spark Streaming; proficiency in Java programming with experience in the Spring Boot framework; and familiarity with database technologies such as Oracle and the Starburst and Impala query engines. Knowledge of bank reconciliation tools such as Smartstream TLM Recs Premium/Exceptor/Quickrec is an added advantage.

To qualify for this position, you should have 10+ years of relevant experience in an apps development or systems analysis role, extensive experience in system analysis and programming of software applications, experience in managing and implementing successful projects, and standing as a Subject Matter Expert (SME) in at least one area of applications development, along with the ability to adjust priorities quickly, demonstrated leadership and project management skills, clear and concise communication skills, experience in building/implementing reporting platforms, and a Bachelor's degree/University degree or equivalent experience (Master's degree preferred). This job description is a summary of the work performed, and other job-related duties may be assigned as needed.
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
Karnataka
On-site
As a Big Data Engineer, you will be responsible for expanding and optimizing the data and database architecture, as well as optimizing data flow and collection for cross-functional teams. You should be an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. Your role will involve supporting software developers, database architects, data analysts, and data scientists on data initiatives, ensuring an optimal data delivery architecture is consistent throughout ongoing projects. You must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products.

You should have sound knowledge of Spark architecture and distributed computing, including Spark Streaming. Proficiency in Spark, including the core RDD and DataFrame functions, troubleshooting, and performance tuning, is essential. A good understanding of object-oriented concepts and hands-on experience with Scala/Java/Kotlin, along with excellent programming logic and technique, is required, as is experience with functional programming and OOP concepts in Scala/Java/Kotlin.

Your responsibilities will include managing a team of Associates and Senior Associates, ensuring proper utilization across projects, and mentoring new members during project onboarding. You should be able to understand client requirements and design, develop, and deliver solutions from scratch. Experience in the AWS cloud is preferable, along with the ability to analyze, re-architect, and re-platform on-premises data warehouses to data platforms on the cloud. Leading client calls to address delays, blockers, escalations, and requirements collation, managing project timing and client expectations, and meeting deadlines are key aspects of the role, as are project and team management, facilitating regular team meetings, understanding business requirements, analyzing different approaches, and planning deliverables and milestones for projects. Optimization, maintenance, and support of pipelines, strong analytical and logical skills, and the ability to tackle new challenges comfortably and keep learning are essential qualities for this role. The ideal candidate should have 4 to 7 years of relevant experience.

Must-have skills: Scala/Java/Kotlin; Spark; SQL (intermediate to advanced level); Spark Streaming; any cloud platform (AWS preferred); Kafka/Kinesis/any streaming service; object-oriented programming; Hive; ETL/ELT design experience; and CI/CD experience for ETL pipeline deployment. Good-to-have skills: proficiency in Git or similar version control tools, knowledge of CI/CD, and microservices.
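The troubleshooting and performance-tuning expectation can be illustrated with a small PySpark sketch; the table path, key column, and partition count are hypothetical and would in practice be chosen from the job's actual data volumes:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("tuning-demo").getOrCreate()
df = spark.read.parquet("path/to/large/table")  # placeholder input

# Repartition on the aggregation key to spread the shuffle evenly,
# and cache a DataFrame that is reused by several downstream actions.
keyed = df.repartition(200, "customer_id").cache()

daily = keyed.groupBy("customer_id", F.to_date("event_ts").alias("d")).count()
totals = keyed.groupBy("customer_id").count()

# explain() exposes the physical plan, usually the first stop when tuning.
daily.explain()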
Posted 1 week ago
6.0 - 9.0 years
25 - 32 Lacs
Bangalore/Bengaluru
Work from Office
Full-time role with a top German MNC in Bangalore. Experience with Scala/Java is a must.

Job Description
As a Data Engineer on our team, you will work with large-scale manufacturing data coming from our globally distributed plants. You will focus on building efficient, scalable, data-driven applications. The data sets produced by these applications, whether data streams or data at rest, need to be highly available, reliable, consistent, and quality-assured so that they can serve as input to a wide range of other use cases and downstream applications. We run these applications on Azure Databricks; besides building applications, you will also contribute to scaling the platform, including topics such as automation and observability. Finally, you are expected to interact with customers and other technical teams, e.g., for requirements clarification and the definition of data models.

Primary responsibilities:
- Be a key contributor to the org's hybrid cloud data platform (on-prem and cloud)
- Design and build data pipelines on a global scale, ranging from small to huge datasets
- Design applications and data models based on deep business understanding and customer requirements
- Work directly with architects and technical leadership to design and implement applications and/or architectural components
- Provide architectural proposals and estimations for applications, and technical leadership to the team
- Coordinate and collaborate with central teams on tasks and standards
- Develop data integration workflows in Azure
- Develop streaming applications using Scala
- Integrate the end-to-end Azure Databricks pipeline to take data from source systems to target systems, ensuring the quality and consistency of data
- Define and implement data quality and validation checks
- Configure data processing and transformation
- Write unit test cases for data pipelines
- Tune pipeline configurations for optimal performance
- Participate in peer reviews and PR reviews of code written by team members

Qualifications
Bachelor's degree in computer science, computer engineering, or a relevant technical field, or equivalent; Master's degree preferred.

Additional Information
Skills:
- Based on deep technical expertise, capable of working directly with architects and technical leadership
- Able to guide junior team members in technical questions related to architecture or software and system design
- A self-starter and empowered professional with strong execution and communication capabilities
- A proactive mindset: identifies and starts work independently, challenges the status quo, accepts being challenged
- Outstanding written and verbal communication skills

Key Competencies:
- 6+ years of experience in data engineering, ETL tools, and working with large data sets
- Minimum 5 years of working experience with distributed clusters
- At least 5 years of experience in Scala/Java software development
- At least 2-3 years of Azure Databricks cloud experience in data engineering
- Experience with Delta tables, ADLS, DBFS, and ADF
- Deep understanding of distributed systems for data storage and processing (e.g., Kafka, Spark, Azure Cloud)
- Experience with a cloud-based SQL database: Azure SQL Editor
- Excellent software engineering skills (i.e., data structures, algorithms, software design)
- Excellent problem-solving, investigative, and troubleshooting skills
- Experience with CI/CD tools such as Jenkins and GitHub
- Ability to work independently

Soft Skills:
- Good communication skills
- Ability to coach and guide young data engineers
- A good level of English as the business language
Posted 1 week ago
7.0 - 12.0 years
25 - 32 Lacs
Bengaluru
Remote
Job Title: Senior Data Engineer
Company: V2Soft India
Location: [Remote/BLR]
Work Mode: [Remote]
Experience: 7+ Years
Employment Type: Full-Time

About the Role:
V2Soft India is looking for a highly skilled and motivated Senior Data Engineer to join our growing team. You will play a critical role in designing, building, and maintaining scalable, secure, and high-performance data platforms to support cutting-edge data products and real-time streaming systems. This is a great opportunity for someone who thrives on solving complex data challenges and wants to contribute to high-impact initiatives.

Key Responsibilities:
- Design and develop scalable, low-latency data pipelines to ingest, process, and stream massive amounts of structured and unstructured data.
- Collaborate cross-functionally to clean, curate, and transform data to meet business needs.
- Integrate privacy and security controls into CI/CD pipelines for all data flows.
- Embed operational-excellence practices, including error handling, monitoring, logging, and alerting.
- Continuously improve the reliability, scalability, and performance of data systems while ensuring high data quality.
- Own KPIs related to platform performance, data delivery, and operational efficiency.

Required Skills & Experience:
- 5+ years of hands-on experience in cloud-native, real-time data systems with a strong emphasis on streaming, scalability, and reliability.
- Proficiency in real-time data technologies such as Apache Spark, Apache Flink, AWS Kinesis, Kafka, AWS Lambda, and EMR/EKS, and Lakehouse platforms like Delta.io/Databricks.
- Strong expertise in AWS architecture, including infrastructure automation, CI/CD, and security best practices.
- Solid understanding of SQL, NoSQL, and relational databases, along with SQL tuning.
- Proficiency in Spark-Scala, PySpark, Python, and/or Java.
- Experience with containerized deployments using Docker, Kubernetes, and Helm.
- Familiarity with monitoring systems for data-loss detection and data quality assurance.
- Deep knowledge of data structures, algorithms, and data engineering design patterns.
- A passion for continuous learning and delivering reliable, high-quality solutions.

Nice to Have:
- Certifications in AWS or Big Data technologies
- Experience with data governance and compliance frameworks
- Exposure to ML pipelines or AI data workflows

Why Join V2Soft?
- Work with cutting-edge technologies in a fast-paced and collaborative environment
- Opportunity to contribute to innovative, high-impact data initiatives
- Supportive team culture and career growth opportunities

How to Apply:
Submit your updated resume to [mbalaram@v2soft.com].
Posted 1 week ago
5.0 - 8.0 years
18 - 20 Lacs
Bengaluru
Work from Office
Job Title: Data Engineer (Java & Kafka)
Location: Bangalore, India
Job Type: Full-Time
Experience: 5+ Years

About the Role:
We are looking for a highly skilled Data Engineer with solid experience in Java and Apache Kafka to join our growing data team in Bangalore. As a key member of our engineering team, you will be responsible for building and optimizing our data pipeline architecture, as well as developing and maintaining data systems that enable scalable, real-time data processing.

Key Responsibilities:
- Design, develop, and maintain scalable, high-performance data pipelines and streaming systems using Java and Apache Kafka.
- Build and manage reliable data ingestion processes from diverse data sources.
- Work closely with data scientists, analysts, and other engineers to integrate and optimize data workflows.
- Implement data quality, monitoring, and alerting solutions.
- Ensure robust data governance, security, and compliance standards.
- Optimize data systems for performance, scalability, and cost efficiency.
- Participate in code reviews and architecture discussions, and contribute to best practices.

Required Skills & Experience:
- Minimum 5 years of experience in data engineering or related backend roles.
- Strong programming experience with Java (mandatory).
- Expertise in working with Apache Kafka for real-time streaming and event-driven architectures.
- Proficiency in building ETL/ELT pipelines and handling large volumes of data.
- Experience with data storage systems such as HDFS, Hive, HBase, Cassandra, or PostgreSQL.
- Familiarity with cloud platforms (AWS/GCP/Azure) is a plus.
- Good understanding of distributed systems, data partitioning, and scalability challenges.
- Strong problem-solving skills and the ability to work independently and in a team.

Preferred Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Experience with other streaming technologies like Apache Flink, Spark Streaming, or Kafka Streams is a plus.
- Exposure to containerization tools like Docker and orchestration platforms like Kubernetes.
Posted 1 week ago
5.0 - 10.0 years
0 - 0 Lacs
Pune, Gurugram, Bengaluru
Hybrid
Data Engineer
Experience: 5 to 9 years
Location: Pune
Skill set:
Candidates should have data engineering experience with Spark. The language can be any of Scala/Python/Java; only experienced Spark developers can be trained on Scala.
Posted 1 week ago
2.0 - 6.0 years
0 Lacs
maharashtra
On-site
Job Description:
We are looking for a skilled PySpark Developer with 4-5 or 2-3 years of experience to join our team. As a PySpark Developer, you will be responsible for developing and maintaining data processing pipelines using PySpark, Apache Spark's Python API. You will work closely with data engineers, data scientists, and other stakeholders to design and implement scalable and efficient data processing solutions. A Bachelor's or Master's degree in Computer Science, Data Science, or a related field is required. The ideal candidate should have strong expertise in the Big Data ecosystem, including Spark, Hive, Sqoop, HDFS, MapReduce, Oozie, Yarn, HBase, and NiFi, and should be below 35 years of age.

You will design, develop, and maintain PySpark data processing pipelines that process large volumes of structured and unstructured data, and collaborate with data engineers and data scientists to understand data requirements and design efficient data models and transformations. Optimizing and tuning PySpark jobs for performance, scalability, and reliability is a key responsibility, as is implementing data quality checks, error handling, and monitoring mechanisms to ensure data accuracy and pipeline robustness. You will also develop and maintain documentation for PySpark code, data pipelines, and data workflows.

Experience in developing production-ready Spark applications using the Spark RDD APIs, DataFrames, Datasets, Spark SQL, and Spark Streaming is required, along with strong experience with Hive bucketing and partitioning and writing complex Hive queries using analytical functions. Knowledge of writing custom UDFs in Hive to support custom business requirements is a plus.

If you meet the above qualifications and are interested in this position, please email your resume, mentioning the position applied for in the subject column, to careers@cdslindia.com.
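As an illustration of the Hive bucketing/partitioning and custom-UDF skills this posting calls for, here is a hedged PySpark sketch; the table name, columns, and masking logic are hypothetical:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StringType

spark = (SparkSession.builder
    .appName("hive-bucketing")
    .enableHiveSupport()
    .getOrCreate())

df = spark.read.parquet("path/to/trades")  # placeholder input

# A custom UDF of the kind the posting mentions (illustrative masking logic).
mask_pan = F.udf(lambda s: s[:2] + "****" if s else None, StringType())
df = df.withColumn("pan_masked", mask_pan(F.col("pan")))

# Persist as a Hive table partitioned by trade_date and bucketed by
# account_id: partitioning prunes scans, bucketing speeds up joins.
(df.write
    .partitionBy("trade_date")
    .bucketBy(32, "account_id")
    .sortBy("account_id")
    .mode("overwrite")
    .saveAsTable("curated.trades"))
```

Note that bucketBy requires saveAsTable (a managed table), which is why the sketch writes to the metastore rather than a plain path.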
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Pune, Maharashtra
On-site
As a Data Analytics focused Senior Software Engineer at PubMatic, you will be responsible for developing advanced AI agents to enhance data analytics capabilities. Your expertise in building and optimizing AI agents, along with strong skills in Hadoop, Spark, Scala, Kafka, Spark Streaming, and cloud-based solutions, will play a crucial role in improving data-driven insights and analytical workflows.

Your key responsibilities will include building and implementing a highly scalable big data platform to process terabytes of data, developing backend services using Java, REST APIs, JDBC, and AWS, and building and maintaining big data pipelines using technologies like Spark, Hadoop, Kafka, and Snowflake. Additionally, you will design and implement real-time data processing workflows, develop GenAI-powered agents for analytics and data enrichment, and integrate LLMs into existing services for query understanding and decision support. You will work closely with cross-functional teams to enhance the availability and scalability of large data platforms and PubMatic software functionality. Participating in Agile/Scrum processes, discussing software features with product managers, and providing customer support over email or JIRA will also be part of your role.

We are looking for candidates with three-plus years of coding experience in Java and backend development, solid computer science fundamentals, expertise in software engineering best practices, hands-on experience with big data tools, and proven expertise in building GenAI applications. The ability to lead feature development, debug distributed systems, and learn new technologies quickly is essential, and strong interpersonal and communication skills, including technical communication, are highly valued. To qualify for this role, you should have a bachelor's degree in engineering (CS/IT) or an equivalent degree from a well-known institute/university.

PubMatic employees globally have returned to our offices via a hybrid work schedule to maximize collaboration, innovation, and productivity. Our benefits package includes paternity/maternity leave, healthcare insurance, broadband reimbursement, and office perks like healthy snacks, drinks, and catered lunches.

About PubMatic: PubMatic is a leading digital advertising platform that provides transparent advertising solutions to publishers, media buyers, commerce companies, and data owners. Our vision is to enable content creators to run a profitable advertising business and invest back into the multi-screen and multi-format content that consumers demand.
Posted 1 week ago
5.0 - 8.0 years
3 - 7 Lacs
Hyderabad
Work from Office
Long Description
- Experience and expertise in at least one of the following languages: Java, Scala, Python
- Experience and expertise in Spark architecture
- Experience in the range of 6-10+ years
- Good problem-solving and analytical skills
- Ability to comprehend business requirements and translate them into technical requirements
- Good communication and collaboration skills, with fellow team members and across vendors
- Familiarity with the development life cycle, including CI/CD pipelines
- Proven experience in, and interest in, supporting existing strategic applications
- Familiarity with working in an agile methodology

Mandatory Skills: Scala programming.
Experience: 5-8 Years.
Posted 1 week ago