1.0 - 5.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Category: Engineering. Experience level: Associate. Primary address: Bangalore, Karnataka. Overview: Voyager (94001), India, Bangalore, Karnataka Associate Data Engineer Do you love building and pioneering in the technology space? Do you enjoy solving complex business problems in a fast-paced, collaborative, inclusive, and iterative delivery environment? At Capital One, you'll be part of a big group of makers, breakers, doers and disruptors, who solve real problems and meet real customer needs. We are seeking Data Engineers who are passionate about marrying data with emerging technologies. As a Capital One Data Engineer, you’ll have the opportunity to be on the forefront of driving a major transformation within Capital One. What You’ll Do: Collaborate with and across Agile teams to design, develop, test, implement, and support technical solutions in full-stack development tools and technologies Work with a team of developers with deep experience in machine learning, distributed microservices, and full stack systems Utilize programming languages like Java, Scala, Python and Open Source RDBMS and NoSQL databases and Cloud based data warehousing services such as Redshift and Snowflake Share your passion for staying on top of tech trends, experimenting with and learning new technologies, participating in internal & external technology communities, and mentoring other members of the engineering community Collaborate with digital product managers, and deliver robust cloud-based solutions that drive powerful experiences to help millions of Americans achieve financial empowerment Perform unit tests and conduct reviews with other team members to make sure your code is rigorously designed, elegantly coded, and effectively tuned for performance Basic Qualifications: Bachelor’s Degree At least 1.5 years of experience in application development (Internship experience does not apply) At least 1 year of experience in big data technologies Preferred Qualifications: 3+ years of experience in application development including Python, SQL, Scala, or Java 1+ years of experience with a public cloud (AWS, Microsoft Azure, Google Cloud) 2+ years of experience with distributed data/computing tools (MapReduce, Hadoop, Hive, EMR, Kafka, Spark, Gurobi, or MySQL) 1+ years of experience working on real-time data and streaming applications 1+ years of experience with NoSQL implementation (Mongo, Cassandra) 1+ years of data warehousing experience (Redshift or Snowflake) 2+ years of experience with UNIX/Linux including basic commands and shell scripting 1+ years of experience with Agile engineering practices At this time, Capital One will not sponsor a new applicant for employment authorization for this position. No agencies please. Capital One is an equal opportunity employer (EOE, including disability/vet) committed to non-discrimination in compliance with applicable federal, state, and local laws. Capital One promotes a drug-free workplace. Capital One will consider for employment qualified applicants with a criminal history in a manner consistent with the requirements of applicable laws regarding criminal background inquiries, including, to the extent applicable, Article 23-A of the New York Correction Law; San Francisco, California Police Code Article 49, Sections 4901-4920; New York City’s Fair Chance Act; Philadelphia’s Fair Criminal Records Screening Act; and other applicable federal, state, and local laws and regulations regarding criminal background inquiries.
If you have visited our website in search of information on employment opportunities or to apply for a position, and you require an accommodation, please contact Capital One Recruiting at 1-800-304-9102 or via email at RecruitingAccommodation@capitalone.com. All information you provide will be kept confidential and will be used only to the extent required to provide needed reasonable accommodations. For technical support or questions about Capital One's recruiting process, please send an email to Careers@capitalone.com Capital One does not provide, endorse nor guarantee and is not liable for third-party products, services, educational tools or other information available through this site. Capital One Financial is made up of several different entities. Please note that any position posted in Canada is for Capital One Canada, any position posted in the United Kingdom is for Capital One Europe and any position posted in the Philippines is for Capital One Philippines Service Corp. (COPSSC).
Posted 2 days ago
2.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Voyager (94001), India, Bangalore, Karnataka Manager, Data Engineering Do you love building and pioneering in the technology space? Do you enjoy solving complex business problems in a fast-paced, collaborative, inclusive, and iterative delivery environment? At Capital One, you'll be part of a big group of makers, breakers, doers and disruptors, who solve real problems and meet real customer needs. We are seeking Data Engineers who are passionate about marrying data with emerging technologies. As a Capital One Data Engineer, you’ll have the opportunity to be on the forefront of driving a major transformation within Capital One. What You’ll Do: Collaborate with and across Agile teams to design, develop, test, implement, and support technical solutions in full-stack development tools and technologies Work with a team of developers with deep experience in machine learning, distributed microservices, and full stack systems Utilize programming languages like Java, Scala, Python and Open Source RDBMS and NoSQL databases and Cloud based data warehousing services such as Redshift and Snowflake Share your passion for staying on top of tech trends, experimenting with and learning new technologies, participating in internal & external technology communities, and mentoring other members of the engineering community Collaborate with digital product managers, and deliver robust cloud-based solutions that drive powerful experiences to help millions of Americans achieve financial empowerment Perform unit tests and conduct reviews with other team members to make sure your code is rigorously designed, elegantly coded, and effectively tuned for performance Basic Qualifications: Bachelor’s Degree At least 4 years of experience in application development (Internship experience does not apply) At least 2 years of experience in big data technologies At least 1 year of experience with cloud computing (AWS, Microsoft Azure, Google Cloud) At least 2 years of people management experience Preferred Qualifications: 7+ years of experience in application development including Python, SQL, Scala, or Java 4+ years of experience with a public cloud (AWS, Microsoft Azure, Google Cloud) 4+ years of experience with distributed data/computing tools (MapReduce, Hadoop, Hive, EMR, Kafka, Spark, Gurobi, or MySQL) 4+ years of experience working on real-time data and streaming applications 4+ years of experience with NoSQL implementation (Mongo, Cassandra) 4+ years of data warehousing experience (Redshift or Snowflake) 4+ years of experience with UNIX/Linux including basic commands and shell scripting 2+ years of experience with Agile engineering practices At this time, Capital One will not sponsor a new applicant for employment authorization for this position. No agencies please. Capital One is an equal opportunity employer (EOE, including disability/vet) committed to non-discrimination in compliance with applicable federal, state, and local laws. Capital One promotes a drug-free workplace. Capital One will consider for employment qualified applicants with a criminal history in a manner consistent with the requirements of applicable laws regarding criminal background inquiries, including, to the extent applicable, Article 23-A of the New York Correction Law; San Francisco, California Police Code Article 49, Sections 4901-4920; New York City’s Fair Chance Act; Philadelphia’s Fair Criminal Records Screening Act; and other applicable federal, state, and local laws and regulations regarding criminal background inquiries.
If you have visited our website in search of information on employment opportunities or to apply for a position, and you require an accommodation, please contact Capital One Recruiting at 1-800-304-9102 or via email at RecruitingAccommodation@capitalone.com. All information you provide will be kept confidential and will be used only to the extent required to provide needed reasonable accommodations. For technical support or questions about Capital One's recruiting process, please send an email to Careers@capitalone.com Capital One does not provide, endorse nor guarantee and is not liable for third-party products, services, educational tools or other information available through this site. Capital One Financial is made up of several different entities. Please note that any position posted in Canada is for Capital One Canada, any position posted in the United Kingdom is for Capital One Europe and any position posted in the Philippines is for Capital One Philippines Service Corp. (COPSSC).
Posted 2 days ago
7.0 - 11.0 years
0 Lacs
karnataka
On-site
As a Senior Engineer at Impetus Technologies, you will play a crucial role in designing, developing, and deploying scalable data processing applications using Java and Big Data technologies. Your responsibilities will include collaborating with cross-functional teams, mentoring junior engineers, and contributing to architectural decisions to enhance system performance and scalability. Your key responsibilities will revolve around designing and maintaining high-performance applications, implementing data ingestion and processing workflows using frameworks like Hadoop and Spark, and optimizing existing applications for improved performance and reliability. You will also be actively involved in mentoring junior engineers, participating in code reviews, and staying updated with the latest technology trends in Java and Big Data. To excel in this role, you should possess a strong proficiency in Java programming language, hands-on experience with Big Data technologies such as Apache Hadoop and Apache Spark, and an understanding of distributed computing concepts. Additionally, you should have experience with data processing frameworks and databases, strong problem-solving skills, and excellent communication and teamwork abilities. In this role, you will collaborate with a diverse team of skilled engineers, data scientists, and product managers who are passionate about technology and innovation. The team environment encourages knowledge sharing, continuous learning, and regular technical workshops to enhance your skills and keep you updated with industry trends. Overall, as a Senior Engineer at Impetus Technologies, you will be responsible for designing and developing scalable Java applications for Big Data processing, ensuring code quality and performance, and troubleshooting and optimizing existing systems to enhance performance and scalability. Qualifications: - Strong proficiency in Java programming language - Hands-on experience with Big Data technologies such as Hadoop, Spark, and Kafka - Understanding of distributed computing concepts - Experience with data processing frameworks and databases - Strong problem-solving skills - Knowledge of version control systems and CI/CD pipelines - Excellent communication and teamwork abilities - Bachelor's or master's degree in Computer Science, Engineering, or related field preferred Experience: 7 to 10 years Job Reference Number: 13131,
Posted 2 days ago
4.0 - 8.0 years
0 Lacs
karnataka
On-site
The team at Walmart Global Tech builds reusable technologies to assist in acquiring customers, onboarding and empowering merchants, and ensuring a seamless experience for all stakeholders. They focus on optimizing tariffs and assortment while maintaining Walmart's philosophy of Everyday Low Cost. Additionally, the team creates personalized omnichannel experiences for customers across various platforms. The Marketplace team serves as the gateway for Third-Party sellers, enabling them to manage their onboarding, catalog, orders, and returns. They are responsible for designing, developing, and operating large-scale distributed systems using cutting-edge technologies. As a Staff Data Scientist at Walmart Global Tech, you will be responsible for developing scalable end-to-end data science solutions for data products. You will collaborate with data engineers and analysts to build ML- and statistics-driven data quality workflows. Your role will involve solving business problems by scaling advanced Machine Learning algorithms on large datasets. You will own the MLOps lifecycle, from data monitoring to model lifecycle management. Additionally, you will demonstrate thought leadership by consulting with product and business stakeholders to deploy machine learning solutions. The ideal candidate should have knowledge of machine learning and statistics, experience with web service standards, and proficiency in architecting solutions with Continuous Integration and Continuous Delivery. Strong coding skills in Python, experience with Big Data technologies like Hadoop and Spark, and the ability to work in a big data ecosystem are preferred. Candidates should have experience in developing and deploying machine learning solutions and collaborating with data scientists. Effective communication skills and the ability to present complex ideas clearly are also desired. Educational qualifications in Computer Science, Statistics, Engineering, or related fields are preferred. Candidates with prior experience in Delivery Promise Optimization or Supply Chain domains and hands-on experience in Spark or similar frameworks will be given preference. At Walmart Global Tech, you will have the opportunity to work in a dynamic environment where your contributions can impact millions of people. The team values innovation, collaboration, and continuous learning. Join us in reimagining the future of retail and making a difference on a global scale. Walmart Global Tech offers a hybrid work model that combines in-office and virtual presence. The company provides competitive compensation, incentive awards, and a range of benefits including maternity leave, health benefits, and more. Walmart fosters a culture of belonging where every associate is valued and respected. The company is committed to creating opportunities for all associates, customers, and suppliers. As an Equal Opportunity Employer, Walmart believes in understanding, respecting, and valuing the uniqueness of individuals while promoting inclusivity.,
Posted 2 days ago
1.0 - 5.0 years
0 Lacs
karnataka
On-site
As an experienced professional, you will be responsible for implementing and supporting Hadoop platform-based applications to efficiently store, retrieve, and process terabytes of data. Your expertise will contribute to the seamless functionality of the system. To excel in this role, you should possess 4+ years of core Java or 2+ years of Python experience. Additionally, having at least 1+ years of working experience with the Hadoop stack including HDFS, MapReduce, HBase, and other related technologies is desired. A solid background of 1+ years in database management with MySQL or equivalent systems will be an added advantage. If you meet the qualifications and are enthusiastic about this opportunity, we encourage you to share your latest CV with us at data@scalein.com or reach out to us via our contact page. Your contribution will be pivotal in driving the success of our data management initiatives.,
Posted 3 days ago
2.0 - 6.0 years
0 Lacs
noida, uttar pradesh
On-site
We are looking for an experienced AI/ML Architect to spearhead the design, development, and deployment of cutting-edge AI and machine learning systems. As the ideal candidate, you should possess a strong technical background in Python and data science libraries, profound expertise in AI and ML algorithms, and hands-on experience in crafting scalable AI solutions. This role demands a blend of technical acumen, leadership skills, and innovative thinking to enhance our AI capabilities. Your responsibilities will include identifying, cleaning, and summarizing complex datasets from various sources, developing Python/PySpark scripts for data processing and transformation, and applying advanced machine learning techniques like Bayesian methods and deep learning algorithms. You will design and fine-tune machine learning models, build efficient data pipelines, and leverage distributed databases and frameworks for large-scale data processing. In addition, you will lead the design and architecture of AI systems, with a focus on Retrieval-Augmented Generation (RAG) techniques and large language models. Your qualifications should encompass 5-7 years of total experience with 2-3 years in AI/ML, proficiency in Python and data science libraries, hands-on experience with PySpark scripting and AWS services, strong knowledge of Bayesian methods and time series forecasting, and expertise in machine learning algorithms and deep learning frameworks. You should also have experience in structured, unstructured, and semi-structured data, advanced knowledge of distributed databases, and familiarity with RAG systems and large language models for AI outputs. Strong collaboration, leadership, and mentorship skills are essential. Preferred qualifications include experience with Spark MLlib, SciPy, StatsModels, SAS, and R, a proven track record in developing RAG systems, and the ability to innovate and apply the latest AI techniques to real-world business challenges. Join our team at TechAhead, a global digital transformation company known for AI-first product design thinking and bespoke development solutions. With over 14 years of experience and partnerships with Fortune 500 companies, we are committed to driving digital innovation and delivering excellence. At TechAhead, you will be part of a dynamic team that values continuous learning, growth, and crafting tailored solutions for our clients. Together, let's shape the future of digital innovation worldwide!,
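For a sense of the Python/PySpark data processing and transformation work this role involves, a minimal sketch might look like the following (the S3 paths, column names, and cleaning rules are illustrative assumptions, not details from the posting):

```python
from pyspark.sql import SparkSession, functions as F

# Minimal, assumed sketch of a PySpark cleaning/transformation step.
spark = SparkSession.builder.appName("feature-prep").getOrCreate()

raw = spark.read.option("header", True).csv("s3://example-bucket/raw/transactions.csv")

clean = (
    raw.dropDuplicates(["transaction_id"])                     # de-duplicate records
       .withColumn("amount", F.col("amount").cast("double"))   # enforce numeric type
       .filter(F.col("amount").isNotNull())                    # drop incomplete rows
       .withColumn("txn_date", F.to_date("txn_date", "yyyy-MM-dd"))
)

# A simple daily aggregate that a downstream forecasting model could consume
daily = clean.groupBy("txn_date").agg(F.sum("amount").alias("daily_amount"))
daily.write.mode("overwrite").parquet("s3://example-bucket/curated/daily_amounts/")
```

A curated aggregate of this kind is the sort of input a Bayesian or time-series forecasting model described above could then be trained on.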
Posted 3 days ago
2.0 - 7.0 years
30 - 35 Lacs
Bengaluru
Work from Office
Senior Data Engineer Date: 27 Jul 2025 Location: Bangalore, IN Company: kmartaustr A place you can belong: We celebrate the rich diversity of the communities in which we operate and are committed to creating inclusive and safe environments where all our team members can contribute and succeed. We believe that all team members should feel valued, respected, and safe irrespective of your gender, ethnicity, indigeneity, religious beliefs, education, age, disability, family responsibilities, sexual orientation and gender identity and we encourage applications from all candidates. Job Description: 5-7 years of experience as a Data Engineer 3+ years with AWS services like IAM, API Gateway, EC2, S3 2+ years of experience creating and deploying containers on Kubernetes 2+ years of experience with CI/CD pipelines like Jenkins and GitHub 2+ years of experience with Snowflake data warehousing 5-7 years with the ETL/ELT paradigm 5-7 years in Big Data technologies like Spark and Kafka Strong skills in Python, Java, or Scala
Posted 4 days ago
0.0 years
0 Lacs
Thane, Maharashtra
On-site
202505329 Thane, Maharashtra, India Description Key Responsibilities Develop and deploy predictive models to support financial planning, budgeting, and forecasting Analyze large datasets to identify trends, anomalies, and opportunities for cost optimization and top-line growth Collaborate with cross-functional teams including Finance, HR, Sales Ops, and IT to align data science initiatives with business goals Proactively mine data lakes/warehouses to identify trends and patterns and generate insights for business units and senior leadership Design and implement machine learning algorithms for financial risk assessment and scenario analysis Create dashboards and visualizations to communicate insights to stakeholders using tools like Power BI or Tableau Ensure data integrity and compliance with internal governance and regulatory standards Contribute to the development of data products and automation tools that enhance financial decision-making Qualifications Required Qualifications Bachelor’s or master’s degree in data science, statistics, mathematics, computer science, finance, or a related field 3+ years of experience in a data science role, preferably within a financial or insurance environment Proficiency in Python, R, SQL, and experience with cloud platforms (AWS, Azure, or GCP) Strong understanding of statistical modeling, machine learning, and financial analytics Knowledge of distributed data/computing tools such as MapReduce, Hadoop, Hive, or Kafka Experience with data visualization tools (e.g., Tableau, Power BI) Excellent communication skills with the ability to explain complex concepts to non-technical stakeholders Strong business acumen and a proactive, problem-solving mindset Strong understanding of AI, its potential roles in solving business problems, and the future trajectory of generative AI models Preferred Qualifications Experience in financial services, insurance, or actuarial domains Familiarity with financial reporting standards and regulatory requirements Knowledge of tools like Microsoft Azure / Fabric, Azure Synapse Analytics, Databricks, Snowflake, or similar big data platforms What We Offer A collaborative and inclusive work environment Opportunities for professional growth and development Competitive compensation and benefits package The chance to work on impactful projects that shape the future of finance at WTW
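The forecasting responsibility above can be pictured with a small, assumed example: fitting a simple time-series model to a made-up monthly operating-expense series. The numbers, the statsmodels library choice, and the ARIMA order are illustrative only, not prescribed by the role:

```python
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical monthly operating-expense series; in practice this would come
# from the finance data warehouse the posting describes.
expenses = pd.Series(
    [110, 112, 118, 121, 119, 125, 130, 128, 134, 140, 138, 145],
    index=pd.date_range("2024-01-01", periods=12, freq="MS"),
    name="opex_in_lakhs",
)

model = ARIMA(expenses, order=(1, 1, 1))  # simple, purely illustrative specification
fitted = model.fit()

# Forecast the next quarter to support budgeting discussions
forecast = fitted.forecast(steps=3)
print(forecast.round(1))
```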
Posted 4 days ago
3.0 - 7.0 years
0 Lacs
madurai, tamil nadu
On-site
As a DBA Expert, you will be required to have strong knowledge of CouchDB architecture, including replication and clustering. Your expertise should include experience with JSON, MapReduce functions, and query optimization for efficient data retrieval. You should be skilled in CouchDB performance tuning and effective indexing strategies to ensure optimal database functionality. Additionally, familiarity with monitoring tools like Prometheus, Grafana, etc., for real-time performance tracking is essential. Your responsibilities will also include understanding backup strategies, including full backups, incremental backups, replication-based backups, automated scheduling, storage policies, and restoration processes. You should be capable of troubleshooting database locks, conflicts, and indexing delays to prevent performance bottlenecks and ensure smooth operations. The ideal candidate for this role should possess strong experience with Apache CouchDB and a solid understanding of NoSQL database concepts, document stores, and JSON-based structures. Experience with CouchDB replication, clustering, and eventual consistency models is a must. Knowledge of data backup/recovery and security best practices is also crucial for this position. In addition to technical expertise, you should have excellent problem-solving and communication skills to effectively collaborate with team members and stakeholders. Your ability to analyze complex database issues and propose effective solutions will be key to success in this role.,
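A small, hypothetical example of the kind of CouchDB querying and indexing work this role covers: a Mango query sent to CouchDB's standard _find endpoint from Python (the host, credentials, database name, and fields are assumptions):

```python
import requests

# Assumed local CouchDB instance and an "orders" database for illustration only.
COUCH = "http://localhost:5984"
AUTH = ("admin", "secret")

query = {
    "selector": {"type": "order", "status": "pending"},
    "fields": ["_id", "customer_id", "total"],
    # Sorting only performs well if a matching JSON index exists on created_at.
    "sort": [{"created_at": "desc"}],
    "limit": 25,
}

resp = requests.post(f"{COUCH}/orders/_find", json=query, auth=AUTH, timeout=10)
resp.raise_for_status()
for doc in resp.json()["docs"]:
    print(doc["_id"], doc.get("total"))
```

The sort clause is a good reminder of why indexing strategy matters here: without an index on created_at, CouchDB falls back to scanning, which is exactly the kind of performance bottleneck the role is expected to prevent.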
Posted 4 days ago
5.0 - 10.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Job Requirements Role/ Job Title: Data Engineer - Gen AI Function/ Department: Data & Analytics Place of Work: Mumbai Job Purpose The data engineer will be working with our data scientists who are building solutions using generative AI in the domain of text, audio, images, and tabular data. They will be responsible for working with large volumes of structured and unstructured data in its storage, retrieval and augmentation with our GenAI solutions which use the said data. Job Responsibilities Build data engineering pipelines focused on unstructured data Conduct requirements gathering and project scoping sessions with subject matter experts, business users, and executive stakeholders to discover and define business data needs in GenAI. Design, build, and optimize the data architecture and extract, transform, and load (ETL) pipelines to make them accessible for Data Scientists and the products built by them. Work on the end-to-end data lifecycle from data ingestion and data transformation to the data consumption layer, and be well versed with APIs and their usability Drive the highest standards in data reliability, data integrity, and data governance, enabling accurate, consistent, and trustworthy data sets A suitable candidate will also demonstrate experience with big data infrastructure inclusive of MapReduce, Hive, HDFS, YARN, HBase, MongoDB, DynamoDB, etc. Create technical design documentation for the projects/pipelines Good skills in technical debugging of the code in case of issues; also, working with Git for code versioning Education Qualification Graduation: Bachelor of Science (B.Sc) / Bachelor of Technology (B.Tech) / Bachelor of Computer Applications (BCA) Post-Graduation: Master of Science (M.Sc) / Master of Technology (M.Tech) / Master of Computer Applications (MCA) Experience Range: 5-10 years of relevant experience
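A hedged sketch of the unstructured-data ingestion step described above: reading raw text documents, splitting them into overlapping chunks, and emitting records that a downstream GenAI embedding or retrieval stage could consume. The paths, chunk sizes, and JSONL output format are illustrative assumptions, not the actual pipeline:

```python
import json
from pathlib import Path

RAW_DIR = Path("data/raw_docs")          # assumed location of raw text documents
OUT_FILE = Path("data/staged/chunks.jsonl")
CHUNK_CHARS = 1000                        # illustrative chunk size
OVERLAP = 200                             # illustrative overlap between chunks

def chunk_text(text: str, size: int, overlap: int):
    """Yield (offset, chunk) pairs with a fixed-size sliding window."""
    step = size - overlap
    for start in range(0, max(len(text), 1), step):
        piece = text[start:start + size]
        if piece.strip():
            yield start, piece

OUT_FILE.parent.mkdir(parents=True, exist_ok=True)
with OUT_FILE.open("w", encoding="utf-8") as out:
    for doc in RAW_DIR.glob("*.txt"):
        text = doc.read_text(encoding="utf-8", errors="ignore")
        for offset, piece in chunk_text(text, CHUNK_CHARS, OVERLAP):
            record = {"source": doc.name, "offset": offset, "text": piece}
            out.write(json.dumps(record, ensure_ascii=False) + "\n")
```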
Posted 4 days ago
0 years
0 Lacs
Navi Mumbai, Maharashtra, India
On-site
Introduction In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology. Your Role And Responsibilities As an Architect at IBM, you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Architect, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets. In this role, your responsibilities may include: Implementing and validating predictive models as well as creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques Designing and implementing various enterprise search applications such as Elasticsearch and Splunk for client requirements Working in an Agile, collaborative environment, partnering with other scientists, engineers, consultants and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviours Preferred Education Master's Degree Required Technical And Professional Expertise Strong understanding of data lake approaches, industry standards and industry best practices. Detail-level understanding of the HADOOP Framework, Ecosystem, MapReduce, and Data on Containers (data in OpenShift). Applies individual experiences / competency and IBM architecting structure thinking model to analyzing client IT systems. Experience with relational SQL, Big Data, etc. Experienced with Cloud native platforms such as AWS, Azure, Google, IBM Cloud or Cloud Native data platforms like Snowflake Preferred Technical And Professional Experience Knowledge of MS-Azure Cloud Experience in Unix shell scripting and Python
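As a rough illustration of the enterprise search work mentioned above, a minimal sketch using the official Elasticsearch Python client (8.x-style API) might look like this; the index name, documents, and query are assumptions for demonstration only:

```python
from elasticsearch import Elasticsearch

# Assumed local cluster and a small "client-docs" index for illustration.
es = Elasticsearch("http://localhost:9200")

es.index(index="client-docs", id="1",
         document={"title": "Q3 risk summary", "body": "credit exposure trends"})
es.index(index="client-docs", id="2",
         document={"title": "Ops runbook", "body": "incident response steps"})
es.indices.refresh(index="client-docs")  # make newly indexed documents searchable

resp = es.search(index="client-docs", query={"match": {"body": "risk exposure"}})
for hit in resp["hits"]["hits"]:
    print(hit["_score"], hit["_source"]["title"])
```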
Posted 5 days ago
3.0 - 4.0 years
0 Lacs
India
On-site
Description GroundTruth is an advertising platform that turns real-world behavior into marketing that drives in-store visits and other real business results. We use observed real-world consumer behavior, including location and purchase data, to create targeted advertising campaigns across all screens, measure how consumers respond, and uncover unique insights to help optimize ongoing and future marketing efforts. With this focus on media, measurement, and insights, we provide marketers with tools to deliver media campaigns that drive measurable impact, such as in-store visits, sales, and more. Learn more at groundtruth.com. We believe that innovative technology starts with the best talent and have been ranked one of Ad Age’s Best Places to Work in 2021, 2022, 2023 & 2025! Learn more about the perks of joining our team here. About Team GroundTruth seeks a Data Engineering Software Engineer to join our Attribution team. The Attribution Team specialises in designing and managing data pipelines that capture and connect user engagement data to optimise ad performance. We engineer scalable solutions for accurate and real-world attribution across the GroundTruth ecosystem. We engineer seamless data flows that fuel reliable analytics and decision-making using big data technologies, such as MapReduce, Spark, and Glue. We take pride in building an Engineering Team composed of strong communicators who collaborate with multiple business and engineering stakeholders to find compromises and solutions. Our engineers are organised and detail-oriented team players who are problem solvers with a maker mindset. As a Software Engineer (SE) on our Integration Team, you will build solutions that add new capabilities to our platform. You Will Create and maintain various ingestion pipelines for the GroundTruth platform. Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies. Work with stakeholders, including the Product, Analytics and Client Services teams to assist with data-related technical issues and support their data infrastructure needs. Prepare detailed specifications and low-level design. Participate in code reviews. Test the product in controlled, real situations before going live. Maintain the application once it is live. Contribute ideas to improve the data platform. You Have Tech./B.E./M.Tech./MCA or equivalent in computer science 3-4 years of experience in Software Engineering Experience with data ingestion pipeline. Experience with AWS Stack used for Data engineering EC2, S3, EMR, ECS, Lambda, and Step functions Hands-on experience with Python/Java for the orchestration of data pipelines Experience in writing analytical queries using SQL Experience in Airflow Experience in Docker Proficient in Git How can you impress us? Knowledge of REST APIs. Any experience with big data technologies like Hadoop, MapReduce, and Pig is a plus Knowledge of shell scripting. Experience with BI tools like Looker. Experience with DB maintenance. Experience with Amazon Web Services and Docker. Configuration management and QA practices. Benefits At GroundTruth, we want our employees to be comfortable with their benefits so they can focus on doing the work they love. Parental leave- Maternity and Paternity Flexible Time Offs (Earned Leaves, Sick Leaves, Birthday leave, Bereavement leave & Company Holidays) In Office Daily Catered Breakfast, Lunch, Snacks and Beverages Health cover for any hospitalization. 
Covers both nuclear family and parents Tele-med for free doctor consultation, discounts on health checkups and medicines Wellness/Gym Reimbursement Pet Expense Reimbursement Childcare Expenses and reimbursements Employee referral program Education reimbursement program Skill development program Cell phone reimbursement (Mobile Subsidy program). Internet reimbursement/Postpaid cell phone bill/or both. Birthday treat reimbursement Employee Provident Fund Scheme offering different tax saving options such as Voluntary Provident Fund and employee and employer contribution up to 12% Basic Creche reimbursement Co-working space reimbursement National Pension System employer match Meal card for tax benefit Special benefits on salary account
Posted 5 days ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Introduction A career in IBM Consulting is rooted in long-term relationships and close collaboration with clients across the globe. You'll work with visionaries across multiple industries to improve the hybrid cloud and AI journey for the most innovative and valuable companies in the world. Your ability to accelerate impact and make meaningful change for your clients is enabled by our strategic partner ecosystem and our robust technology platforms across the IBM portfolio. Your Role And Responsibilities As an Associate Software Developer at IBM, you'll work with clients to co-create solutions to major real-world challenges by using best practice technologies, tools, techniques, and products to translate system requirements into the design and development of customized systems. Preferred Education Master's Degree Required Technical And Professional Expertise Strong proficiency in Java, Spring Framework, Spring Boot, RESTful APIs, excellent understanding of OOP, Design Patterns. Strong knowledge of ORM tools like Hibernate or JPA, Java based Micro-services framework, Hands on experience on Spring Boot Microservices, Primary Skills: - Core Java, Spring Boot, Java2/EE, Microservices - Hadoop Ecosystem (HBase, Hive, MapReduce, HDFS, Pig, Sqoop, etc.) - Spark Good to have Python. Strong knowledge of micro-service logging, monitoring, debugging and testing, In-depth knowledge of relational databases (e.g., MySQL) Experience in container platforms such as Docker and Kubernetes, experience in messaging platforms such as Kafka or IBM MQ, good understanding of Test-Driven-Development Familiar with Ant, Maven or other build automation for Java, Spring Boot, API, Microservices, Security Preferred Technical And Professional Experience Experience in Concurrent design and multi-threading Primary Skills: - Core Java, Spring Boot, Java2/EE, Microservices - Hadoop Ecosystem (HBase, Hive, MapReduce, HDFS, Pig, Sqoop, etc.) - Spark Good to have Python.
Posted 6 days ago
3.0 years
4 Lacs
Delhi
On-site
Job Description: Hadoop & ETL Developer Location: Shastri Park, Delhi Experience: 3+ years Education: B.E./ B.Tech/ MCA/ MSC (IT or CS) / MS Salary: Up to 80k (the final figure depends on the interview and experience) Notice Period: Immediate joiners to a maximum of 20 days Only candidates from Delhi/NCR will be preferred Job Summary: We are looking for a Hadoop & ETL Developer with strong expertise in big data processing, ETL pipelines, and workflow automation. The ideal candidate will have hands-on experience in the Hadoop ecosystem, including HDFS, MapReduce, Hive, Spark, HBase, and PySpark, as well as expertise in real-time data streaming and workflow orchestration. This role requires proficiency in designing and optimizing large-scale data pipelines to support enterprise data processing needs. Key Responsibilities Design, develop, and optimize ETL pipelines leveraging Hadoop ecosystem technologies. Work extensively with HDFS, MapReduce, Hive, Sqoop, Spark, HBase, and PySpark for data processing and transformation. Implement real-time and batch data ingestion using Apache NiFi, Kafka, and Airbyte. Develop and manage workflow orchestration using Apache Airflow. Perform data integration across structured and unstructured data sources, including MongoDB and Hadoop-based storage. Optimize MapReduce and Spark jobs for performance, scalability, and efficiency. Ensure data quality, governance, and consistency across the pipeline. Collaborate with data engineering teams to build scalable and high-performance data solutions. Monitor, debug, and enhance big data workflows to improve reliability and efficiency. Required Skills & Experience: 3+ years of experience in the Hadoop ecosystem (HDFS, MapReduce, Hive, Sqoop, Spark, HBase, PySpark). Strong expertise in ETL processes, data transformation, and data warehousing. Hands-on experience with Apache NiFi, Kafka, Airflow, and Airbyte. Proficiency in SQL and handling structured and unstructured data. Experience with NoSQL databases like MongoDB. Strong programming skills in Python or Scala for scripting and automation. Experience in optimizing Spark and MapReduce jobs for high-performance computing. Good understanding of data lake architectures and big data best practices. Preferred Qualifications Experience in real-time data streaming and processing. Familiarity with Docker/Kubernetes for deployment and orchestration. Strong analytical and problem-solving skills with the ability to debug and optimize data workflows. If you have a passion for big data, ETL, and large-scale data processing, we’d love to hear from you! Job Types: Full-time, Contractual / Temporary Pay: From ₹400,000.00 per year Work Location: In person
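To picture the Airflow orchestration this role calls for, here is a minimal, assumed DAG (Airflow 2.4+ syntax); the schedule, file paths, and spark-submit arguments are placeholders rather than a prescribed design:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# Hedged sketch of a daily ingest -> transform -> load pipeline.
with DAG(
    dag_id="daily_hadoop_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    ingest = BashOperator(
        task_id="ingest_raw",
        bash_command="hdfs dfs -put /landing/orders_{{ ds }}.csv /data/raw/orders/",
    )
    transform = BashOperator(
        task_id="spark_transform",
        bash_command="spark-submit --master yarn /opt/jobs/transform_orders.py {{ ds }}",
    )
    load = BashOperator(
        task_id="load_hive",
        bash_command="beeline -u jdbc:hive2://hive:10000 -f /opt/sql/load_orders.hql",
    )

    ingest >> transform >> load  # simple linear dependency chain
```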
Posted 6 days ago
2.0 - 4.0 years
25 - 30 Lacs
Pune
Work from Office
Rapid7 is looking for Data Engineer to join our dynamic team and embark on a rewarding career journey Liaising with coworkers and clients to elucidate the requirements for each task. Conceptualizing and generating infrastructure that allows big data to be accessed and analyzed. Reformulating existing frameworks to optimize their functioning. Testing such structures to ensure that they are fit for use. Preparing raw data for manipulation by data scientists. Detecting and correcting errors in your work. Ensuring that your work remains backed up and readily accessible to relevant coworkers. Remaining up-to-date with industry standards and technological advancements that will improve the quality of your outputs.
Posted 1 week ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Description YOUR IMPACT Are you passionate about developing mission-critical, high quality software solutions, using cutting-edge technology, in a dynamic environment? OUR IMPACT We are Compliance Engineering, a global team of more than 300 engineers and scientists who work on the most complex, mission-critical problems. We build and operate a suite of platforms and applications that prevent, detect, and mitigate regulatory and reputational risk across the firm; have access to the latest technology and to massive amounts of structured and unstructured data; and leverage modern frameworks to build responsive and intuitive UX/UI and Big Data applications. Compliance Engineering is looking to fill several big data software engineering roles. Your first deliverable and success criteria will be the deployment, in 2025, of new complex data pipelines and surveillance models to detect inappropriate trading activity. How You Will Fulfill Your Potential As a member of our team, you will: partner globally with sponsors, users and engineering colleagues across multiple divisions to create end-to-end solutions, learn from experts, leverage various technologies including Java, Spark, Hadoop, Flink, MapReduce, HBase, JSON, Protobuf, Presto, Elastic Search, Kafka, Kubernetes be able to innovate and incubate new ideas, have an opportunity to work on a broad range of problems, including negotiating data contracts, capturing data quality metrics, processing large scale data, building surveillance detection models, be involved in the full life cycle: defining, designing, implementing, testing, deploying, and maintaining software systems across our products. Qualifications A successful candidate will possess the following attributes: A Bachelor's or Master's degree in Computer Science, Computer Engineering, or a similar field of study. Expertise in Java, as well as proficiency with databases and data manipulation. Experience in end-to-end solutions, automated testing and SDLC concepts. The ability (and tenacity) to clearly express ideas and arguments in meetings and on paper. Experience in some of the following is desired and can set you apart from other candidates: developing in large-scale systems, such as MapReduce on Hadoop/HBase, data analysis using tools such as SQL, Spark SQL, Zeppelin/Jupyter, API design, such as to create interconnected services, knowledge of the financial industry and compliance or risk functions, ability to influence stakeholders. About Goldman Sachs Goldman Sachs is a leading global investment banking, securities and investment management firm that provides a wide range of financial services to a substantial and diversified client base that includes corporations, financial institutions, governments and individuals. Founded in 1869, the firm is headquartered in New York and maintains offices in all major financial centers around the world.
Posted 1 week ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Description To work on data analysis, triaging and investigating data quality and data pipeline exceptions and reporting issues. Requirements This role will primarily support Data Operations and Reporting related projects but will also help with other projects as needed. In this role, you will leverage your strong analytical skills to triage and investigate data quality and data pipeline exceptions and reporting issues. The ideal candidate should be able to work independently and actively engage other functional teams as needed. This role requires researching transactions and events using large amounts of data. Technical Experience/Qualifications: At least 5 years of experience in software development At least 5 years of SQL experience in any RDBMS Minimum 5 years of experience in Python Strong analytical and problem-solving skills Strong communication skills Strong experience with data modeling Strong experience in data analysis and reporting. Experience with version control tools such as GitHub etc. Experience with shell scripting and Linux Knowledge of agile and scrum methodologies Preferred experience in Hive SQL or related technologies such as BigQuery etc. Preferred experience in Big data technologies like Hadoop, AWS/GCP, S3, HIVE, Impala, HDFS, Spark, MapReduce Preferred experience in reporting tools such as Looker or Tableau etc. Preferred experience in finance and accounting but not required Job responsibilities Responsibilities: Develop SQL queries as per technical requirements Investigate and fix day to day data related issues Develop test plans and execute test scripts Data validation and analysis Develop new reports/dashboards as per technical requirements Modify existing reports/dashboards for bug fixes and enhancements Develop new ETL scripts and modify existing ones for bug fixes and enhancements Monitor ETL processes and fix issues in case of failure Monitor scheduled jobs and fix issues in case of failure Monitor data quality alerts and act on them What we offer Culture of caring. At GlobalLogic, we prioritize a culture of caring. Across every region and department, at every level, we consistently put people first. From day one, you’ll experience an inclusive culture of acceptance and belonging, where you’ll have the chance to build meaningful connections with collaborative teammates, supportive managers, and compassionate leaders. Learning and development. We are committed to your continuous learning and development. You’ll learn and grow daily in an environment with many opportunities to try new things, sharpen your skills, and advance your career at GlobalLogic. With our Career Navigator tool as just one example, GlobalLogic offers a rich array of programs, training curricula, and hands-on opportunities to grow personally and professionally. Interesting & meaningful work. GlobalLogic is known for engineering impact for and with clients around the world. As part of our team, you’ll have the chance to work on projects that matter. Each is a unique opportunity to engage your curiosity and creative problem-solving skills as you help clients reimagine what’s possible and bring new solutions to market. In the process, you’ll have the privilege of working on some of the most cutting-edge and impactful solutions shaping the world today. Balance and flexibility. We believe in the importance of balance and flexibility. With many functional career areas, roles, and work arrangements, you can explore ways of achieving the perfect balance between your work and life. 
Your life extends beyond the office, and we always do our best to help you integrate and balance the best of work and life, having fun along the way! High-trust organization. We are a high-trust organization where integrity is key. By joining GlobalLogic, you’re placing your trust in a safe, reliable, and ethical global company. Integrity and trust are a cornerstone of our value proposition to our employees and clients. You will find truthfulness, candor, and integrity in everything we do. About GlobalLogic GlobalLogic, a Hitachi Group Company, is a trusted digital engineering partner to the world’s largest and most forward-thinking companies. Since 2000, we’ve been at the forefront of the digital revolution – helping create some of the most innovative and widely used digital products and experiences. Today we continue to collaborate with clients in transforming businesses and redefining industries through intelligent products, platforms, and services.
Posted 1 week ago
6.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About Persistent We are an AI-led, platform-driven Digital Engineering and Enterprise Modernization partner, combining deep technical expertise and industry experience to help our clients anticipate what’s next. Our offerings and proven solutions create a unique competitive advantage for our clients by giving them the power to see beyond and rise above. We work with many industry-leading organizations across the world, including 12 of the 30 most innovative global companies, 60% of the largest banks in the US and India, and numerous innovators across the healthcare ecosystem. Our disruptor’s mindset, commitment to client success, and agility to thrive in the dynamic environment have enabled us to sustain our growth momentum by reporting $1,409.1M revenue in FY25, delivering 18.8% Y-o-Y growth. Our 23,900+ global team members, located in 19 countries, have been instrumental in helping the market leaders transform their industries. We are also pleased to share that Persistent won in four categories at the prestigious 2024 ISG Star of Excellence™ Awards, including the Overall Award based on the voice of the customer. We were included in the Dow Jones Sustainability World Index, setting high standards in sustainability and corporate responsibility. We were awarded for our state-of-the-art learning and development initiatives at the 16th TISS LeapVault CLO Awards. In addition, we were cited as the fastest-growing IT services brand in the 2024 Brand Finance India 100 Report. Throughout our market-leading growth, we’ve maintained a strong employee satisfaction score of 8.2/10. At Persistent, we embrace diversity to unlock everyone's potential. Our programs empower our workforce by harnessing varied backgrounds for creative, innovative problem-solving. Our inclusive environment fosters belonging, encouraging employees to unleash their full potential. For more details please login to www.persistent.com About The Position We are looking for a Big Data Lead who will be responsible for the management of data sets that are too big for traditional database systems to handle. You will create, design, and implement data processing jobs in order to transform the data into a more usable format. You will also ensure that the data is secure and complies with industry standards to protect the company’s information.
What You’ll Do Manage the customer's priorities for projects and requests Assess customer needs utilizing a structured requirements process (gathering, analyzing, documenting, and managing changes) to prioritize immediate business needs and advise on options, risks, and cost Design and implement software products (Big Data related) including data models and visualizations Participate actively in the teams you work in Deliver good solutions against tight timescales Be proactive, suggest new approaches, and develop your capabilities Share what you are good at while learning from others to improve the team overall Demonstrate a solid understanding of a range of technical skills, attitudes, and behaviors Deliver great solutions Be focused on driving value back into the business Expertise You’ll Bring 6 years' experience in designing and developing enterprise application solutions for distributed systems Understanding of Big Data Hadoop Ecosystem components (Sqoop, Hive, Pig, Flume) Additional experience working with Hadoop, HDFS, and cluster management; Hive, Pig, and MapReduce; and Hadoop ecosystem frameworks such as HBase, Talend, and NoSQL databases. Apache Spark or other streaming Big Data processing experience is preferred; Java or other Big Data technologies will be a plus (an illustrative Hive-backed Spark sketch follows this posting) Benefits Competitive salary and benefits package Culture focused on talent development with quarterly promotion cycles and company-sponsored higher education and certifications Opportunity to work with cutting-edge technologies Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards Annual health check-ups Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents Persistent Ltd. is dedicated to fostering diversity and inclusion in the workplace. We invite applications from all qualified individuals, including those with disabilities, and regardless of gender or gender preference. We welcome diverse candidates from all backgrounds. Inclusive Environment We offer hybrid work options and flexible working hours to accommodate various needs and preferences. Our office is equipped with accessible facilities, including adjustable workstations, ergonomic chairs, and assistive technologies to support employees with physical disabilities. If you are a person with disabilities and have specific requirements, please inform us during the application process or at any time during your employment. We are committed to creating an inclusive environment where all employees can thrive. Our company fosters a values-driven and people-centric work environment that enables our employees to: Accelerate growth, both professionally and personally Impact the world in powerful, positive ways, using the latest technologies Enjoy collaborative innovation, with diversity and work-life wellbeing at the core Unlock global opportunities to work and learn with the industry’s best Let’s unleash your full potential at Persistent - persistent.com/careers
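As a rough illustration of the Hive/Spark work this role implies (not taken from the posting), the sketch below reads a hypothetical Hive table with PySpark and writes a daily aggregate back to the warehouse; the database, table, and column names are all assumptions.

```python
# Illustrative PySpark batch aggregation over a Hive-managed table (names assumed).
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder
    .appName("daily_orders_rollup")
    .enableHiveSupport()          # allows reading/writing Hive-managed tables
    .getOrCreate()
)

orders = spark.table("sales_db.orders")  # hypothetical Hive table

daily_rollup = (
    orders
    .where(F.col("order_status") == "COMPLETED")
    .groupBy("order_date", "region")
    .agg(
        F.count("*").alias("order_count"),
        F.sum("order_amount").alias("total_amount"),
    )
)

# Persist the rollup as a warehouse table for downstream reporting.
daily_rollup.write.mode("overwrite").saveAsTable("sales_db.daily_orders_rollup")
```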
Posted 1 week ago
8.0 years
30 - 38 Lacs
Gurgaon
Remote
Role: AWS Data Engineer Location: Gurugram Mode: Hybrid Type: Permanent Job Description: We are seeking a talented and motivated Data Engineer with requisite years of hands-on experience to join our growing data team. The ideal candidate will have experience working with large datasets, building data pipelines, and utilizing AWS public cloud services to support the design, development, and maintenance of scalable data architectures. This is an excellent opportunity for individuals who are passionate about data engineering and cloud technologies and want to make an impact in a dynamic and innovative environment. Key Responsibilities: Data Pipeline Development: Design, develop, and optimize end-to-end data pipelines for extracting, transforming, and loading (ETL) large volumes of data from diverse sources into data warehouses or lakes. Cloud Infrastructure Management: Implement and manage data processing and storage solutions in AWS (Amazon Web Services) using services like S3, Redshift, Lambda, Glue, Kinesis, and others. Data Modeling: Collaborate with data scientists, analysts, and business stakeholders to define data requirements and design optimal data models for reporting and analysis. Performance Tuning & Optimization: Identify bottlenecks and optimize query performance, pipeline processes, and cloud resources to ensure cost-effective and scalable data workflows. Automation & Scripting: Develop automated data workflows and scripts to improve operational efficiency using Python, SQL, or other scripting languages. Collaboration & Documentation: Work closely with data analysts, data scientists, and other engineering teams to ensure data availability, integrity, and quality. Document processes, architectures, and solutions clearly. Data Quality & Governance: Ensure the accuracy, consistency, and completeness of data. Implement and maintain data governance policies to ensure compliance and security standards are met. Troubleshooting & Support: Provide ongoing support for data pipelines and troubleshoot issues related to data integration, performance, and system reliability. Qualifications: Essential Skills: Experience: 8+ years of professional experience as a Data Engineer, with a strong background in building and optimizing data pipelines and working with large-scale datasets. AWS Experience: Hands-on experience with AWS cloud services, particularly S3, Lambda, Glue, Redshift, RDS, and EC2. ETL Processes: Strong understanding of ETL concepts, tools, and frameworks. Experience with data integration, cleansing, and transformation. Programming Languages: Proficiency in Python, SQL, and other scripting languages (e.g., Bash, Scala, Java). Data Warehousing: Experience with relational and non-relational databases, including data warehousing solutions like AWS Redshift, Snowflake, or similar platforms. Data Modeling: Experience in designing data models, schema design, and data architecture for analytical systems. Version Control & CI/CD: Familiarity with version control tools (e.g., Git) and CI/CD pipelines. Problem-Solving: Strong troubleshooting skills, with an ability to optimize performance and resolve technical issues across the data pipeline. Desirable Skills: Big Data Technologies: Experience with Hadoop, Spark, or other big data technologies. Containerization & Orchestration: Knowledge of Docker, Kubernetes, or similar containerization/orchestration technologies. Data Security: Experience implementing security best practices in the cloud and managing data privacy requirements. 
Data Streaming: Familiarity with data streaming technologies such as AWS Kinesis or Apache Kafka. Business Intelligence Tools: Experience with BI tools (Tableau, Quicksight) for visualization and reporting. Agile Methodology: Familiarity with Agile development practices and tools (Jira, Trello, etc.) Job Type: Permanent Pay: ₹3,000,000.00 - ₹3,800,000.00 per year Benefits: Work from home Schedule: Day shift Monday to Friday Experience: Data Engineering: 6 years (Required) AWS Elastic MapReduce (EMR): 3 years (Required) AWS: 4 years (Required) Work Location: In person
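One common pipeline-trigger pattern implied by the posting above, an S3 "object created" event invoking a Lambda that starts an AWS Glue ETL job, could look roughly like the hedged sketch below; the Glue job name, argument keys, and catalog database are hypothetical placeholders.

```python
# Hedged sketch: Lambda handler that starts a Glue ETL run for each new S3 object.
import boto3

glue = boto3.client("glue")

def lambda_handler(event, context):
    runs = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        response = glue.start_job_run(
            JobName="raw_to_curated_etl",            # hypothetical Glue job name
            Arguments={
                "--source_path": f"s3://{bucket}/{key}",
                "--target_database": "curated",      # hypothetical Glue catalog database
            },
        )
        runs.append(response["JobRunId"])
    return {"started_job_runs": runs}
```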
Posted 1 week ago
0 years
2 - 3 Lacs
Chennai
On-site
Responsible for designing, developing, and optimizing data processing solutions using a combination of Big Data technologies. Focus on building scalable and efficient data pipelines for handling large datasets and enabling batch & real-time data streaming and processing. Responsibilities: > Develop Spark applications using Scala or Python (PySpark) for data transformation, aggregation, and analysis. > Develop and maintain Kafka-based data pipelines: This includes designing Kafka Streams, setting up Kafka clusters, and ensuring efficient data flow. > Create and optimize Spark applications using Scala and PySpark, leveraging these languages to process large datasets and implement data transformations and aggregations. > Integrate Kafka with Spark for real-time processing: Build systems that ingest real-time data from Kafka and process it using Spark Streaming or Structured Streaming (a minimal Structured Streaming sketch follows this posting). > Collaborate with data teams, including data engineers, data scientists, and DevOps, to design and implement data solutions. > Tune and optimize Spark and Kafka clusters: Ensuring high performance, scalability, and efficiency of data processing workflows. > Write clean, functional, and optimized code: Adhering to coding standards and best practices. > Troubleshoot and resolve issues: Identifying and addressing any problems related to Kafka and Spark applications. > Maintain documentation: Creating and maintaining documentation for Kafka configurations, Spark jobs, and other processes. > Stay updated on technology trends: Continuously learning and applying new advancements in functional programming, big data, and related technologies. Proficiency in: Hadoop ecosystem big data tech stack (HDFS, YARN, MapReduce, Hive, Impala). Spark (Scala, Python) for data processing and analysis. Kafka for real-time data ingestion and processing. ETL processes and data ingestion tools. Deep hands-on expertise in PySpark, Scala, and Kafka. Programming Languages: Scala, Python, or Java for developing Spark applications. SQL for data querying and analysis. Other Skills: Data warehousing concepts. Linux/Unix operating systems. Problem-solving and analytical skills. Version control systems. - Job Family Group: Technology - Job Family: Applications Development - Time Type: Full time - Most Relevant Skills Please see the requirements listed above. - Other Relevant Skills For complementary skills, please see above and/or contact the recruiter. - Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.
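The Kafka-to-Spark integration called out above can be illustrated with a minimal PySpark Structured Streaming job. The broker addresses, topic name, event schema, and checkpoint path below are assumptions, and the spark-sql-kafka connector package must be supplied at submit time.

```python
# Minimal Structured Streaming sketch: Kafka JSON events -> running per-currency totals.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("payments_stream").getOrCreate()

event_schema = StructType([
    StructField("account_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("currency", StringType()),
])

raw = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092,broker2:9092")  # hypothetical brokers
    .option("subscribe", "payments")                                  # hypothetical topic
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers key/value as binary; parse the JSON value into typed columns.
events = (
    raw.select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
)

per_currency = events.groupBy("currency").agg(F.sum("amount").alias("total_amount"))

query = (
    per_currency.writeStream
    .outputMode("complete")        # un-watermarked aggregations require complete mode
    .format("console")
    .option("checkpointLocation", "/tmp/checkpoints/payments")  # hypothetical path
    .start()
)
query.awaitTermination()
```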
Posted 1 week ago
0 years
6 - 17 Lacs
India
Remote
A Big Data/Hadoop Developer designs, develops, and maintains Hadoop-based solutions. This role involves working with the Hadoop ecosystem to build data processing pipelines, write MapReduce jobs, and integrate Hadoop with other systems. They collaborate with data scientists and analysts to gather requirements and deliver insights, ensuring efficient data ingestion, transformation, and storage. Key Responsibilities: Design and Development: Creating, implementing, and maintaining Hadoop applications, including designing data processing pipelines, writing MapReduce jobs, and developing efficient data ingestion and transformation processes. Hadoop Ecosystem Expertise: Having a strong understanding of the Hadoop ecosystem, including components like HDFS, MapReduce, Hive, Pig, HBase, and tools like Flume, Zookeeper, and Oozie. Data Analysis and Insights: Analyzing large datasets stored in Hadoop to uncover valuable insights and generate reports. Collaboration: Working closely with data scientists, analysts, and other stakeholders to understand requirements and deliver effective solutions. Performance Optimization: Optimizing the performance of Hadoop jobs and data processing pipelines, ensuring efficient resource utilization. Data Security and Privacy: Maintaining data security and privacy within the Hadoop environment. Documentation and Best Practices: Creating and maintaining documentation for Hadoop development, including best practices and standards. Skills and Qualifications: Strong Programming Skills: Proficient in languages like Java, Python, or Scala, and experience with MapReduce programming. Hadoop Framework Knowledge: Deep understanding of the Hadoop ecosystem and its core components. Data Processing Tools: Experience with tools like Hive, Pig, HBase, Spark, and Kafka. Data Modeling and Analysis: Familiarity with data modeling techniques and experience in analyzing large datasets. Problem-Solving and Analytical Skills: Ability to troubleshoot issues, optimize performance, and derive insights from data. Communication and Collaboration: Effective communication skills to collaborate with diverse teams and stakeholders. Linux Proficiency: Familiarity with Linux operating systems and basic command-line operations. Tamil candidates only Job Type: Full-time Pay: ₹633,061.90 - ₹1,718,086.36 per year Benefits: Food provided Work from home Work Location: In person
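To make the MapReduce portion of this role concrete, here is a hedged sketch of a Hadoop Streaming job written in Python that counts web-log records per HTTP status code; the log layout, field position, and file names are assumptions, not anything specified in the posting.

```python
# Hedged Hadoop Streaming sketch: count web-log lines per HTTP status code.
import sys

def mapper():
    # Emits "status_code\t1" for each log line (assumed Apache combined-log layout).
    for line in sys.stdin:
        fields = line.split()
        if len(fields) > 8:
            print(f"{fields[8]}\t1")  # fields[8] = HTTP status code in this layout

def reducer():
    # Sums counts per status code; Hadoop delivers input sorted by key.
    current_key, count = None, 0
    for line in sys.stdin:
        key, value = line.rstrip("\n").split("\t")
        if key != current_key:
            if current_key is not None:
                print(f"{current_key}\t{count}")
            current_key, count = key, 0
        count += int(value)
    if current_key is not None:
        print(f"{current_key}\t{count}")

if __name__ == "__main__":
    # In practice the mapper and reducer live in separate files passed to
    # `hadoop jar hadoop-streaming.jar -mapper mapper.py -reducer reducer.py ...`;
    # here a command-line flag selects the phase for demonstration.
    mapper() if sys.argv[1:] == ["map"] else reducer()
```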
Posted 1 week ago
6.0 - 11.0 years
22 - 27 Lacs
Pune, Bengaluru
Work from Office
Build ETL jobs using Fivetran and dbt for our internal projects and for customers that use various platforms such as Azure, Salesforce, and AWS technologies. Build out data lineage artifacts to ensure all current and future systems are properly documented. Required Candidate profile: Experience with strong proficiency in SQL query/development skills. Develop ETL routines that manipulate and transfer large volumes of data and perform quality checks. Experience in the healthcare industry with PHI/PII.
Posted 1 week ago
7.0 - 9.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description: About Us At Bank of America, we are guided by a common purpose to help make financial lives better through the power of every connection. Responsible Growth is how we run our company and how we deliver for our clients, teammates, communities, and shareholders every day. One of the keys to driving Responsible Growth is being a great place to work for our teammates around the world. We’re devoted to being a diverse and inclusive workplace for everyone. We hire individuals with a broad range of backgrounds and experiences and invest heavily in our teammates and their families by offering competitive benefits to support their physical, emotional, and financial well-being. Bank of America believes both in the importance of working together and offering flexibility to our employees. We use a multi-faceted approach for flexibility, depending on the various roles in our organization. Working at Bank of America will give you a great career with opportunities to learn, grow and make an impact, along with the power to make a difference. Join us! Global Business Services Global Business Services delivers Technology and Operations capabilities to Lines of Business and Staff Support Functions of Bank of America through a centrally managed, globally integrated delivery model and globally resilient operations. Global Business Services is recognized for flawless execution, sound risk management, operational resiliency, operational excellence, and innovation. In India, we are present in five locations and operate as BA Continuum India Private Limited (BACI), a non-banking subsidiary of Bank of America Corporation and the operating company for India operations of Global Business Services. Process Overview* The Data Analytics Strategy platform and decision tool team is responsible for the data strategy for all of CSWT and for the development of the platforms that support it. The Data Science Platform, Graph Data Platform, and Enterprise Events Hub are key platforms of the Data Platform initiative. Job Description* As a Senior Hadoop Developer building Hadoop components in SDP (the strategic data platform), the individual will be responsible for understanding the design, proposing high-level and detailed design solutions, and ensuring that coding practices and quality comply with software development standards. Working as an individual contributor on projects, the person should have strong analytical skills to make quick decisions under pressure and solid knowledge of writing complex queries on large clusters. Engage in discussions with architecture teams to arrive at design solutions, propose new technology adoption ideas, attend project meetings, partner with near-shore and offshore teammates in an agile environment, and coordinate with other application teams, development, testing, and upstream and downstream partners. Responsibilities: Develop high-performance and scalable Analytics solutions using the Big Data platform to facilitate the collection, storage, and analysis of massive data sets from multiple channels. Develop efficient utilities, data pipelines, and ingestion frameworks that can be utilized across multiple business areas. Utilize your in-depth knowledge of Hadoop stack and storage technologies, including HDFS, Spark, MapReduce, Yarn, Hive, Sqoop, Impala, Hue, and Oozie, to design and optimize data processing workflows. Perform data analysis, coding, and performance tuning; propose improvement ideas; and drive development activities at offshore.
Analyze, modify, and tune complex Hive queries (an illustrative tuning sketch follows this posting). Hands-on experience writing and modifying Python/shell scripts. Provide guidance and mentorship to junior teammates. Work with strategic partners to understand the requirements and work on high-level and detailed design to address real-time issues in production. Partner with near-shore and offshore teammates in an Agile environment, coordinating with other application teams, development, testing, and upstream/downstream partners. Work on multiple projects concurrently, taking ownership of and pride in the work, attending project meetings, understanding requirements, designing solutions, and developing code. Identify gaps in technology and propose viable solutions. Identify improvement areas within the application and work with the respective teams to implement them. Ensure adherence to defined processes and quality standards, best practices, and high quality levels in all deliverables. Desired Skills* Data Lake Architecture: Understanding of Medallion architecture. Ingestion Frameworks: Knowledge of ingestion frameworks for structured, unstructured, and semi-structured data. Data Warehouse: Familiarity with Apache Hive and Impala. Performs Continuous Integration and Continuous Delivery (CI/CD) activities. Hands-on experience working in the Cloudera Data Platform (CDP) to support the Data Science platform. Contributes to story refinement and definition of requirements. Participates in estimating work necessary to realize a story/requirement through the delivery lifecycle. Extensive hands-on experience supporting platforms that allow modelers and analysts to go through complete model lifecycle management (data munging, model development/training, governance, deployment). Experience with model deployment, scoring, and monitoring for batch and real-time workloads on various technologies and platforms. Experience in Hadoop cluster integration, including ETL, streaming, and API styles of integration. Experience in deployment automation using Ansible playbooks and scripting. Experience with developing and building RESTful API services in an efficient and scalable manner. Design, build, and deploy streaming and batch data pipelines capable of processing and storing large datasets (TBs) quickly and reliably using Kafka, Spark, and YARN. Experience with processing and deployment technologies such as YARN, Kubernetes/containers, and serverless compute for model development and training. Effective communication, strong stakeholder engagement skills, and proven ability in leading and mentoring a team of software engineers in a dynamic environment. Requirements* Education* Graduation / Post Graduation Experience Range* 7 to 9 years Foundational Skills Hadoop, Hive, Sqoop, Impala, Unix/Linux scripts. Desired Skills Python, CI/CD, ETL. Work Timings* 11:30 AM to 8:30 PM IST Job Location* Chennai
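As a hedged example of the Hive query analysis and tuning mentioned above, the sketch below runs the same kind of aggregation through Spark SQL with a partition filter and a broadcast-join hint; the database, tables, columns, and date are hypothetical.

```python
# Hedged Hive-on-Spark tuning sketch: partition pruning plus a broadcast-join hint.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("position_summary_tuned")
    .enableHiveSupport()
    .getOrCreate()
)

# Untuned version (for contrast) would scan every partition and shuffle both tables:
# spark.sql("SELECT ... FROM risk_db.positions p JOIN risk_db.desks d ON ...")

tuned = spark.sql("""
    SELECT /*+ BROADCAST(d) */             -- small dimension table fits in memory
           p.business_date,
           d.desk_name,
           SUM(p.market_value) AS total_mv
    FROM   risk_db.positions p             -- assumed partitioned by business_date
    JOIN   risk_db.desks d
      ON   p.desk_id = d.desk_id
    WHERE  p.business_date = '2024-06-28'  -- partition filter enables pruning
    GROUP  BY p.business_date, d.desk_name
""")

tuned.write.mode("overwrite").saveAsTable("risk_db.position_summary_daily")
```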
Posted 1 week ago