
8340 Hadoop Jobs - Page 17

Set up a Job Alert
JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

5.0 - 9.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Big Data Engineer with over 5 years of experience, you will lead a team of engineers and bring a proactive attitude toward learning and implementing solutions in the domain. Your core expertise will be in Spark, with proficiency in Scala/Java, Airflow orchestration, and AWS. You will define the system's scope, deliver effective Big Data solutions, and collaborate with software research and development teams.

You will also train staff on data resource management, drawing on a strong educational background with an Engineering or Master's degree in computer engineering or computer science. In-depth knowledge of Hadoop, Spark, and similar frameworks is essential for driving innovative solutions, and excellent interpersonal and communication skills will support effective collaboration within the team. Your ability to solve complex networking, data, and software issues will be crucial to the successful implementation of Big Data solutions.
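To make the stack above concrete, here is a minimal sketch of how Airflow can orchestrate a Scala/Java Spark job, the combination this posting names. The DAG id, jar path, and main class are hypothetical placeholders, and the example assumes the Apache Spark Airflow provider is installed.

```python
# A minimal Airflow DAG that submits a Spark job daily.
# All names and paths here are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator

with DAG(
    dag_id="daily_events_aggregation",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # Submits a Scala/Java Spark application packaged as a jar.
    aggregate = SparkSubmitOperator(
        task_id="aggregate_events",
        application="s3://my-bucket/jobs/events-aggregation.jar",  # hypothetical path
        java_class="com.example.EventsAggregation",                # hypothetical class
        conn_id="spark_default",
        conf={"spark.sql.shuffle.partitions": "200"},
    )
```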

Posted 6 days ago

Apply

3.0 - 7.0 years

0 Lacs

Hyderabad, Telangana

On-site

The ideal candidate for this role should have strong skills in AWS EMR, EC2, AWS S3, CloudFormation templates, Batch data, and AWS CodePipeline services; experience with EKS is an added advantage. As this is a hands-on role, the candidate is expected to have good administrative knowledge of AWS EMR, EC2, S3, CloudFormation templates, and Batch data. Responsibilities include managing and deploying EMR clusters, supported by a solid understanding of AWS accounts and IAM, along with administrative experience of both EMR persistent and transient clusters.

A good understanding of AWS CloudFormation, cluster setup, and AWS networking is essential. Hands-on experience with Infrastructure as Code deployment tools such as Terraform is highly desirable, and experience in AWS health monitoring and optimization is required. Knowledge of Hadoop and Big Data is considered an added advantage for this position.
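For illustration, a minimal boto3 sketch of the transient-cluster administration described above: launching an EMR cluster that runs a single Spark step and terminates itself. The cluster name, release label, instance types, and S3 path are assumptions, and the default EMR IAM roles are presumed to exist.

```python
# Sketch: launching a transient EMR cluster that runs one Spark step and
# terminates itself. Names, versions and roles are hypothetical.
import boto3

emr = boto3.client("emr", region_name="ap-south-1")

response = emr.run_job_flow(
    Name="transient-spark-batch",            # hypothetical cluster name
    ReleaseLabel="emr-6.15.0",               # assumed EMR release
    Instances={
        "MasterInstanceType": "m5.xlarge",
        "SlaveInstanceType": "m5.xlarge",
        "InstanceCount": 3,
        "KeepJobFlowAliveWhenNoSteps": False,  # transient: terminate when done
    },
    Steps=[{
        "Name": "spark-etl",
        "ActionOnFailure": "TERMINATE_CLUSTER",
        "HadoopJarStep": {
            "Jar": "command-runner.jar",
            "Args": ["spark-submit", "s3://my-bucket/jobs/etl.py"],  # hypothetical
        },
    }],
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)
print("ClusterId:", response["JobFlowId"])
```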

Posted 1 week ago

Apply

8.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

We are looking for candidates with 8+ years of experience for this role.

Job Location: Technopark, Trivandrum.
Experience: 8+ years of experience in Microsoft SQL Server administration.
Primary skills: Strong experience in Microsoft SQL Server.
Qualification: Bachelor's degree in computer science, software engineering, or a related field. Microsoft SQL certifications (MTA Database, MCSA: SQL Server, MCSE: Data Management and Analytics) will be an advantage.
Secondary skills: Experience in MySQL, PostgreSQL, and Oracle database administration. Exposure to Data Lake, Hadoop, and Azure technologies. Exposure to DevOps or ITIL.

Main duties/responsibilities:
Query Optimization: Optimize database queries to ensure fast and efficient data retrieval, particularly for complex or high-volume operations. Design and implement effective indexing strategies to reduce query execution times and improve overall database performance. Monitor and profile slow or inefficient queries and recommend best practices for rewriting or re-architecting them. Continuously analyze execution plans for SQL queries to identify bottlenecks and optimize them.
Database Maintenance: Schedule and execute regular maintenance tasks, including backups, consistency checks, and index rebuilding.
Health Monitoring: Implement automated monitoring systems to track database performance, availability, and critical parameters such as CPU usage, memory, disk I/O, and replication status.
Proactive Issue Resolution: Diagnose and resolve database issues (e.g., locking, deadlocks, data corruption) proactively, before they impact users or operations.
High Availability: Implement and manage database clustering, replication, and failover strategies to ensure high availability and disaster recovery (e.g., using tools like SQL Server Always On, Oracle RAC, MySQL Group Replication).
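As a rough illustration of the indexing and query-profiling duties above, here is a Python sketch using pyodbc against SQL Server; the connection string, table, and column names are hypothetical.

```python
# Sketch: create a covering index for a frequent lookup and inspect the most
# I/O-heavy cached queries. Connection details and schema are hypothetical.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver;DATABASE=orders_db;UID=dba;PWD=secret"  # hypothetical DSN
)
cur = conn.cursor()

# Covering index so the hot query can be served without key lookups.
cur.execute("""
    CREATE NONCLUSTERED INDEX IX_Orders_CustomerId_OrderDate
    ON dbo.Orders (CustomerId, OrderDate)
    INCLUDE (TotalAmount);
""")
conn.commit()

# Surface the most expensive cached queries by logical reads.
cur.execute("""
    SELECT TOP 5 qs.total_logical_reads, st.text
    FROM sys.dm_exec_query_stats qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
    ORDER BY qs.total_logical_reads DESC;
""")
for reads, sql_text in cur.fetchall():
    print(reads, sql_text[:80])
```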

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

As a Site Reliability Engineering Manager at o9 Solutions, you will work for an AI-based unicorn recognized as one of the fastest-growing companies on the Inc. 5000 list. You will lead a team of talented SRE professionals to maintain and execute organizational policies and procedures for change management, configuration management, release and deployment management, service monitoring, and problem management, and to support the o9 Digital Brain Platform across major cloud providers including AWS, GCP, Azure, and Samsung Cloud.

In this role, you will be empowered to continuously challenge the status quo and implement innovative ideas that create value for o9 clients. Your responsibilities will include deploying, maintaining, and supporting o9 Digital Brain SaaS environments on all major clouds; managing the SRE team and hiring and growing SRE talent; and leading the planning, building, configuration, testing, and deployment of software and systems that manage platform infrastructure and applications. You will collaborate with internal and external customers on o9 platform deployment, maintenance, and support needs; improve reliability, quality, cost, time-to-deploy, and time-to-upgrade; monitor, measure, and optimize system performance; provide on-call support on a rotation/shift basis; analyze and approve code and configuration changes; and work with teams globally across different time zones.

To qualify, you should have a Bachelor's degree in Computer Science, Software Engineering, Information Technology, Industrial Engineering, or Engineering Management, along with at least 9 years of experience building and leading high-performing, diverse teams as an SRE or DevOps manager, plus experience in cloud administration and a Kubernetes certification. You should also have 5+ years of experience in an SRE role, covering deploying and maintaining applications, performance tuning, conducting application upgrades and patches, and supporting continuous integration and deployment tooling, with experience in cloud platforms like AWS, Azure, or GCP, as well as Docker, Kubernetes, and big data platforms. Strong skills in operating system concepts, Linux, troubleshooting, automation, cloud, Jenkins, Ansible, Terraform, ArgoCD, and database administration are expected.

At o9 Solutions, we value team spirit, transparency, and frequent communication, and we offer a flat organizational structure with an entrepreneurial culture, a supportive network, a diverse international working environment, and work-life balance. If you are passionate about learning, adapting to new technology, and making a difference in a scale-up environment, we encourage you to apply and be part of our mission to digitally transform planning and decision-making for enterprises and the planet.

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Hyderabad, Telangana

On-site

Genpact (NYSE: G) is a global professional services and solutions firm dedicated to shaping the future by delivering impactful outcomes. With a team of over 125,000 professionals spread across more than 30 countries, we are characterized by our inherent curiosity, entrepreneurial spirit, and commitment to creating enduring value for our clients. Fueled by our overarching purpose of continually striving towards a world that functions better for individuals, we partner with and enhance leading enterprises, including members of the prestigious Fortune Global 500. Our core competencies revolve around in-depth business and industry expertise, digital operational services, and proficiency in data, technology, and AI.

We are currently seeking applications for the position of Business Analyst - Data Scientist to join our dynamic team. As a Business Analyst - Data Scientist at Genpact, you will play a pivotal role in the development and implementation of NLP (Natural Language Processing) models and algorithms, extracting actionable insights from textual data, and collaborating with cross-functional teams to deliver innovative AI solutions.

**Responsibilities:**

**Model Development:**
- Proficiency in various statistical, machine learning, and ensemble algorithms.
- Strong understanding of time series algorithms and forecasting use cases.
- Ability to discern the strengths and weaknesses of different models and select appropriate ones for specific problems.
- Proficiency in evaluating metrics and recommending suitable evaluation metrics for different problem types.

**Data Analysis:**
- Extracting meaningful insights from structured data.
- Preprocessing data for machine learning/artificial intelligence applications.

**Collaboration:**
- Close collaboration with data scientists, engineers, and business stakeholders.
- Providing technical guidance and mentorship to team members.

**Integration and Deployment:**
- Integrating machine learning models into production systems.
- Implementing CI/CD pipelines for continuous integration and deployment.

**Documentation and Training:**
- Documenting processes, models, and results.
- Providing training and support to stakeholders on NLP techniques and tools.

**Qualifications we seek in you:**

**Minimum Qualifications / Skills:**
- Bachelor's degree in computer science, engineering, or a related field.
- Proficient programming skills in Python and R.
- Experience with data science frameworks such as scikit-learn and NumPy.
- Knowledge of machine learning concepts and frameworks like TensorFlow and PyTorch.
- Strong problem-solving and analytical capabilities.
- Excellent communication and collaboration skills.

**Preferred Qualifications/Skills:**
- Experience in predictive analytics and machine learning techniques.
- Proficiency in Python/R or any other open-source programming language.
- Building and implementing models, using algorithms, and running simulations with various tools.
- Familiarity with visualization tools such as Tableau, Power BI, QlikView, etc.
- Proficiency in applied statistics skills, including distributions, statistical testing, regression, etc.
- Knowledge and experience in tools and techniques like forecasting, linear regression, logistic regression, and machine learning algorithms (e.g., Random Forest, Gradient Boosting, SVM, XGBoost, Deep Learning).
- Experience with big data technologies (Hadoop, Spark).
- Familiarity with cloud platforms like AWS, Azure, GCP.

**Job Details:**
**Title:** Business Analyst - Data Scientist
**Primary Location:** India-Hyderabad
**Education Level:** Bachelor's / Graduation / Equivalent
**Job Posting:** Apr 1, 2025, 2:47:23 AM
**Unposting Date:** May 1, 2025, 1:29:00 PM
**Master Skills List:** Digital
**Job Category:** Full Time

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Haryana

On-site

As a Hadoop Developer, you will develop a semantic model in the data lake to centralize transformation workflows currently managed in Qlik. Your expertise in data modeling, ETL pipeline development, and performance optimization will play a crucial role in enabling seamless data consumption for analytics and reporting.

Your key responsibilities will include translating Qlik-specific transformation logic into Hadoop/Impala-based processing, developing modular and reusable transformation layers to enhance scalability and flexibility, optimizing the semantic layer for high performance, and ensuring seamless integration with dashboarding tools. You will also design and implement ETL pipelines using Python to streamline data ingestion, transformation, and storage. Collaboration with data analysts, BI teams, and business stakeholders will be essential to align the semantic model with reporting requirements, and you will monitor, troubleshoot, and enhance data processing workflows to ensure reliability and efficiency.

The ideal candidate will have strong experience in Hadoop, Impala, and distributed data processing frameworks; proficiency in Python for ETL pipeline development and automation; a good command of SQL and performance tuning for large-scale datasets; knowledge of data modeling principles and best practices for semantic layers; and familiarity with Qlik transformation logic and the ability to translate it into scalable processing. Familiarity with big data performance tuning and optimization strategies, strong problem-solving skills, and the ability to work in a fast-paced environment are also key qualifications for this role.

If you have the required skills and qualifications and are interested in this opportunity, please forward your updated resume to vidhya@thinkparms.in.
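To illustrate the kind of semantic layer the role describes, here is a hedged sketch using the impyla client: a reusable Impala view that centralizes one transformation, plus a stats computation for the planner. Host, schema, and column names are hypothetical.

```python
# Sketch: a reusable transformation published as an Impala view, so dashboards
# consume one semantic model instead of per-tool Qlik logic. Host, database
# and column names are hypothetical.
from impala.dbapi import connect

conn = connect(host="impala-coordinator.example.com", port=21050)
cur = conn.cursor()

# Modular transformation layer expressed as a view.
cur.execute("""
    CREATE VIEW IF NOT EXISTS semantic.daily_sales AS
    SELECT store_id,
           to_date(sold_at)       AS sale_date,
           SUM(amount)            AS revenue,
           COUNT(DISTINCT txn_id) AS transactions
    FROM raw.pos_transactions
    GROUP BY store_id, to_date(sold_at)
""")

# Compute table statistics so Impala's planner optimizes downstream queries.
cur.execute("COMPUTE STATS raw.pos_transactions")
```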

Posted 1 week ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Position Overview

Job Title: Full Stack Developer with Java, SQL, React, Python
Location: Pune, India
Corporate Title: VP

Role Description

Technology underpins our entire business. Our Technology, Data and Innovation (TDI) strategy is focused on strengthening engineering expertise, introducing an agile delivery model, as well as modernising the bank's IT infrastructure. We continue to invest in and build a team of visionary tech talent, providing you with the training, freedom and opportunity to do pioneering work. As an [Engineer] you will develop and deliver significant components of engineering solutions to satisfy complex and diverse business goals. You will engage and partner with the business whilst working within a broader creative, collaborative and innovative team, with a strong desire to make an impact.

You will be joining the dbSleuth Team within Regulatory & Cross Product IT, delivering Trader and Counterparty surveillance across all business sections of Deutsche Bank. We are an engineering-focused organization, striving for the highest quality architecture, design and code across our teams. You will help to build our surveillance systems, working in a fast-paced, agile environment. Our workload for new deliveries is high, using React for UI development, Python/Spark/Scala for services, Hadoop Big Data, and data science for anomaly detection using machine learning and statistical risk models.

What We’ll Offer You

As part of our flexible scheme, here are just some of the benefits that you’ll enjoy:
- Best-in-class leave policy
- Gender-neutral parental leave
- 100% reimbursement under the childcare assistance benefit (gender neutral)
- Sponsorship for industry-relevant certifications and education
- Employee Assistance Program for you and your family members
- Comprehensive hospitalization insurance for you and your dependents
- Accident and term life insurance
- Complimentary health screening for those aged 35 and above

Your Key Responsibilities
- Provide leadership within a delivery team: modelling, coding and testing, and collaborating to understand requirements, create stories, design solutions, implement them and help test them.
- Help create a culture of learning and continuous improvement within your team, and be accountable for the successful delivery of a regulatory-critical workstream.
- Employ a range of techniques to analyse problems and evaluate multiple solutions against engineering, business and strategic criteria.
- Identify and resolve barriers to business deliveries, implementing solutions which iteratively deliver value.
- Design solutions using common design patterns with a range of design tools and techniques.
- Conduct peer reviews to ensure designs are fit for purpose, extensible and re-usable.
- Design and build solutions which are secure and controlled.

Your Skills And Experience
- Analytical thinker and team player with strong communication skills.
- Enable experimentation and fast-learning approaches to creating business solutions.
- Familiar with the use of solution design tools.
- Understand key elements of security, risk and control.
- Track record of identifying and making improvements to the delivery process.
- Experience working with very large datasets using technologies such as Python, React JS and SQL, with a good understanding of UI functioning and infrastructure.
- Experience utilizing data modelling tools, Domain-Driven Design, and a strong knowledge of SQL and advanced data analysis to deliver good-quality code within enterprise-scale development (CI/CD).
- Experience with development utilising SDLC tools - Git, JIRA, Artifactory, Jenkins/TeamCity, OpenShift.

How We’ll Support You
- Training and development to help you excel in your career
- Coaching and support from experts in your team
- A culture of continuous learning to aid progression
- A range of flexible benefits that you can tailor to suit your needs

About Us And Our Teams

Please visit our company website for further information: https://www.db.com/company/company.htm

We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.

Posted 1 week ago

Apply

4.0 - 8.0 years

0 Lacs

Karnataka

On-site

As a Data Engineer with over 4 years of experience, you will design, develop, and maintain scalable data pipelines to facilitate efficient data extraction, transformation, and loading (ETL) processes. You will architect and implement data storage solutions such as data warehouses, data lakes, and data marts that align with the organization's business needs, and implement robust data quality checks and cleansing techniques to ensure data accuracy and consistency.

Your responsibilities will also include optimizing data pipelines for performance, scalability, and cost-effectiveness, and collaborating with data analysts and data scientists to understand data requirements and translate them into technical solutions. You will develop and maintain data security measures to ensure data privacy and regulatory compliance, and automate data processing tasks using scripting languages like Python and Bash as well as big data frameworks such as Spark and Hadoop. Monitoring data pipelines and infrastructure for performance and troubleshooting any issues that arise will be essential, as will staying up to date with the latest trends and technologies in data engineering, including cloud platforms like AWS, Azure, and GCP. Documenting data pipelines, processes, and data models for maintainability and knowledge sharing, and contributing to the overall data governance strategy and best practices, will be integral to your role.

Qualifications for this position include a strong understanding of data architectures, data modeling principles, and ETL processes; proficiency in SQL (e.g., MySQL, PostgreSQL) and experience with big data querying languages like Hive and Spark SQL; experience with scripting languages for data manipulation and automation; familiarity with distributed data processing frameworks like Spark and Hadoop; and knowledge of cloud platforms for data storage and processing. Candidates should also possess experience with data quality tools and techniques; excellent problem-solving, analytical, and critical thinking skills; and strong communication, collaboration, and teamwork abilities.

Key Skills: Spark, Hadoop, Python, Windows Azure, AWS, SQL, HiveQL
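As a sketch of the pipeline-plus-quality-check responsibilities above, the following PySpark snippet reads raw JSON, gates the batch on a simple data-quality rule, and writes partitioned Parquet. Paths, column names, and the 1% threshold are assumptions.

```python
# Sketch of an ETL step with a simple data-quality gate. Paths and column
# names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

raw = spark.read.json("s3://my-lake/raw/orders/")  # hypothetical source

# Quality check: reject the batch if too many rows lack a primary key.
null_ratio = raw.filter(F.col("order_id").isNull()).count() / max(raw.count(), 1)
if null_ratio > 0.01:
    raise ValueError(f"order_id null ratio {null_ratio:.2%} exceeds threshold")

cleaned = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_date", F.to_date("created_at"))
)

# Partitioned Parquet keeps downstream scans cheap.
cleaned.write.mode("overwrite").partitionBy("order_date") \
       .parquet("s3://my-lake/curated/orders/")
```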

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

You should have 5+ years of experience with expertise in Data Engineering, including hands-on design and development of big data platforms. The ideal candidate will have a deep understanding of modern data processing technology stacks such as Spark, HBase, Hive, and other Hadoop ecosystem technologies, with a focus on development using Scala, along with a deep understanding of streaming data architectures and technologies for real-time and low-latency data processing. Experience with agile development methods, including core values, guiding principles, and key agile practices, is required. Understanding of the theory and application of Continuous Integration/Delivery is a plus, and familiarity with NoSQL technologies, including column family, graph, document, and key-value data storage, is desirable. A passion for software craftsmanship is essential for this role; experience in the financial industry would be beneficial.

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

We are seeking a skilled and seasoned Senior Data Engineer to join our innovative team. The ideal candidate will have a solid foundation in data engineering and proficiency in Azure, particularly Azure Data Factory (ADF), Azure Fabric, Databricks, and Snowflake. In this role, you will design, build, and maintain data pipelines, ensure data quality and accessibility, and collaborate with various teams to support our data-centric initiatives.

Your responsibilities will include crafting, enhancing, and sustaining robust data pipelines using Azure Data Factory, Azure Fabric, Databricks, and Snowflake, working closely with data scientists, analysts, and stakeholders to understand data requirements, guarantee data availability, and maintain data quality. You will implement and refine ETL processes to efficiently ingest, transform, and load data from diverse sources into data warehouses, data lakes, and Snowflake, and you will ensure data integrity and security by adhering to best practices and data governance policies. Monitoring and fixing data pipelines for timely and accurate data delivery, optimizing data storage and retrieval processes for performance and scalability, staying abreast of industry trends and best practices in data engineering and cloud technologies, and mentoring junior data engineers will also be among your key responsibilities.

To qualify for this position, you should hold a Bachelor's or Master's degree in Computer Science, Information Technology, or a related field, and have over 5 years of experience in data engineering with a strong emphasis on Azure, ADF, Azure Fabric, Databricks, and Snowflake. Proficiency in SQL, experience in data modeling and database design, and solid programming skills in Python, Scala, or Java are prerequisites. Familiarity with big data technologies like Apache Spark, Hadoop, and Kafka, as well as a sound grasp of data warehousing concepts and solutions, including Azure Synapse Analytics and Snowflake, are highly desirable. Knowledge of data governance, data quality, and data security best practices, exceptional problem-solving skills, and effective communication and collaboration within a team setting are essential.

Preferred qualifications include experience with other Azure services such as Azure Blob Storage, Azure SQL Database, and Azure Cosmos DB; familiarity with DevOps practices and tools for CI/CD in data engineering; and certifications in Azure Data Engineering, Snowflake, or related areas.
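For a concrete flavor of the Snowflake side of this ELT work, here is a minimal sketch with the snowflake-connector-python client that bulk-loads staged Parquet files into a staging table. The account, warehouse, stage, and table names are hypothetical.

```python
# Sketch: loading staged files into Snowflake as part of an ELT pipeline.
# Account, stage and table names are hypothetical.
import snowflake.connector

conn = snowflake.connector.connect(
    user="etl_user", password="***",
    account="myorg-myaccount",        # hypothetical account locator
    warehouse="ETL_WH", database="ANALYTICS", schema="STAGING",
)
cur = conn.cursor()

# Bulk-load Parquet files from an external stage into an existing table.
cur.execute("""
    COPY INTO STAGING.ORDERS
    FROM @lake_stage/orders/
    FILE_FORMAT = (TYPE = PARQUET)
    MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE
""")
print(cur.fetchall())  # per-file load results
```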

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

We are looking for an experienced AI/ML Architect to spearhead the design, development, and deployment of cutting-edge AI and machine learning systems. The ideal candidate has a strong technical background in Python and data science libraries, deep expertise in AI and ML algorithms, and hands-on experience crafting scalable AI solutions. This role demands a blend of technical acumen, leadership skills, and innovative thinking to enhance our AI capabilities.

Your responsibilities will include identifying, cleaning, and summarizing complex datasets from various sources; developing Python/PySpark scripts for data processing and transformation; and applying advanced machine learning techniques such as Bayesian methods and deep learning algorithms. You will design and fine-tune machine learning models, build efficient data pipelines, and leverage distributed databases and frameworks for large-scale data processing. In addition, you will lead the design and architecture of AI systems, with a focus on Retrieval-Augmented Generation (RAG) techniques and large language models.

Your qualifications should include 5-7 years of total experience with 2-3 years in AI/ML; proficiency in Python and data science libraries; hands-on experience with PySpark scripting and AWS services; strong knowledge of Bayesian methods and time series forecasting; and expertise in machine learning algorithms and deep learning frameworks. You should also have experience with structured, unstructured, and semi-structured data; advanced knowledge of distributed databases; and familiarity with RAG systems and large language models for AI outputs. Strong collaboration, leadership, and mentorship skills are essential. Preferred qualifications include experience with Spark MLlib, SciPy, StatsModels, SAS, and R; a proven track record in developing RAG systems; and the ability to innovate and apply the latest AI techniques to real-world business challenges.

Join our team at TechAhead, a global digital transformation company known for AI-first product design thinking and bespoke development solutions. With over 14 years of experience and partnerships with Fortune 500 companies, we are committed to driving digital innovation and delivering excellence. At TechAhead, you will be part of a dynamic team that values continuous learning, growth, and crafting tailored solutions for our clients. Together, let's shape the future of digital innovation worldwide!
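Since the role centers on RAG, here is a minimal sketch of just the retrieval step, using sentence-transformers embeddings and cosine similarity; the model choice, documents, and query are illustrative assumptions, and a production system would use a vector store and an LLM for the generation step.

```python
# Minimal sketch of RAG retrieval: embed documents, embed the query, rank by
# cosine similarity, and pass the best match to an LLM as grounding context.
# The model name and documents are assumptions for illustration.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

docs = [
    "Invoices are processed within 3 business days.",
    "Refunds require manager approval above $500.",
    "Quarterly reports are due the first Monday of the quarter.",
]
doc_vecs = model.encode(docs, normalize_embeddings=True)

query = "How long does invoice processing take?"
q_vec = model.encode([query], normalize_embeddings=True)[0]

# Cosine similarity reduces to a dot product on normalized vectors.
scores = doc_vecs @ q_vec
best = docs[int(np.argmax(scores))]

# `best` would be stuffed into the LLM prompt as grounding context.
print(best)
```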

Posted 1 week ago

Apply

6.0 - 14.0 years

0 Lacs

Haryana

On-site

As a Data Engineer with Python and SQL, you will leverage your 6 to 14 years of experience to create data solutions using Python scripting and SQL. Your strong knowledge of object-oriented and functional programming concepts will be key to developing efficient and effective pipelines; while proficiency in Python is required, experience with Java, Ruby, Scala, or Clojure will also be considered. In this role, you will integrate services to build pipeline solutions on various cloud platforms such as AWS, Hadoop, EMR, Azure, and Google Cloud; AWS experience is a plus but not required. Experience with relational and NoSQL databases is beneficial, and DevOps or DataOps experience is an advantage. You will work in a hybrid office environment, 3 days a week in Gurgaon, ensuring seamless collaboration with the team. Join us and contribute to the development of innovative data solutions!

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

As a part of the Talent Acquisition team at Tesco, you will play a crucial role in representing Talent Acquisition in various forums and seminars related to process, compliance, and audit. Additionally, you will be responsible for driving a Continuous Improvement (CI) culture, implementing CI projects, and fostering innovation within the team.

Your role will involve engaging with business and functional partners to gain a deep understanding of business priorities. You will be required to ask relevant questions and translate the insights into an analytical solution document. This document will highlight how the application of data science can enhance decision-making processes.

To excel in this role, you must possess a strong understanding of techniques for preparing analytical data sets from multiple complex sources. You will be expected to develop statistical models and machine learning algorithms with a high level of competency, and to write structured, modularized, and codified algorithms using Continuous Improvement principles.

In addition to building algorithms, you will create an easy-to-understand visualization layer on top of the analytical models. This visualization layer will empower end-users to make informed decisions. You will also be responsible for proactively promoting the adoption of solutions developed by the team and identifying areas for improvement within the larger Tesco business.

Keeping abreast of the latest trends in data science and retail analytics is essential for this role. You will be expected to share your knowledge with colleagues and mentor a small team of Applied Data Scientists to deliver impactful analytics projects.

Your responsibilities will include leading solution scoping and development to facilitate the collaboration between Enterprise Analytics teams and Business teams across Tesco. It is imperative to adhere to the Business Code of Conduct, act with integrity, and fulfill specific risk responsibilities related to Talent Acquisition, process compliance, and audit.

To thrive in this role, you will need expertise in Applied Math, including Applied Statistics, Regression, Decision Trees, Forecasting, and Optimization algorithms. Proficiency in SQL, Hadoop, Spark, Python, Tableau, MS Excel, MS PowerPoint, and GitHub is also required. Additionally, having a basic understanding of the Retail domain and soft skills such as Analytical Thinking, Problem-solving, Storyboarding, and Stakeholder engagement will be beneficial.

Joining Tesco's team in Bengaluru offers you the opportunity to be part of a multi-disciplinary team that aims to serve customers, communities, and the planet better each day. By standardizing processes, delivering cost savings, leveraging technological solutions, and empowering colleagues, Tesco in Bengaluru strives to create a sustainable competitive advantage. With a focus on reducing complexity and offering high-quality services, you will contribute to Tesco's mission of providing exceptional experiences for customers worldwide.

Tesco Technology is a diverse team of over 5,000 experts located in various countries, including India. The Technology division encompasses roles in Engineering, Product Development, Programme Management, Service Desk Operations, Systems Engineering, Security & Capability, Data Science, and more. Established in 2004, Tesco in Bengaluru plays a vital role in enhancing customer experiences and streamlining operations for millions of customers and over 330,000 colleagues globally.

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

BizViz is a company that offers a comprehensive view of a business's data, catering to various industries and meeting the diverse needs of business executives. With a dedicated team of over 50 professionals working on the BizViz platform for several years, the company aims to develop technological solutions that give our clients a competitive advantage. At BizViz, we are committed to the success of our customers, striving to create applications that align with their unique visions and requirements; we steer clear of generic ERP templates, offering businesses a more tailored solution.

As a Big Data Engineer at BizViz, you will join a small, agile team of data engineers building an innovative big data platform for enterprises dealing with critical data management and diverse application stakeholders at scale. The platform handles data ingestion, warehousing, and governance, allowing developers to create complex queries efficiently. With features like automatic scaling, elasticity, security, logging, and data provenance, it empowers developers to concentrate on algorithms rather than administrative tasks. We are seeking engineers who are eager for technical challenges, to enhance our current platform for existing clients and develop new capabilities for future customers.

Key Responsibilities:
- Work as a Senior Big Data Engineer within the Data Science Innovation team, collaborating closely with internal and external stakeholders throughout the development process.
- Understand the needs of key stakeholders to enhance or create new solutions related to data and analytics.
- Collaborate in a cross-functional, matrix organization, even in ambiguous situations.
- Contribute to scalable solutions using large datasets alongside other data scientists.
- Research innovative data solutions to address real market challenges.
- Analyze data to provide fact-based recommendations for innovation projects.
- Explore Big Data and other unstructured data sources to uncover new insights.
- Partner with cross-functional teams to develop and execute business strategies.
- Stay updated on advancements in data analytics, Big Data, predictive analytics, and technology.

Qualifications:
- BTech/MCA degree or higher.
- Minimum 5 years of experience.
- Proficiency in Java, Scala, and Python.
- Familiarity with Apache Spark, Hadoop, Hive, Spark SQL, Spark Streaming, and Apache Kafka.
- Knowledge of predictive algorithms, MLlib, Cassandra, RDBMS (MySQL, MS SQL, etc.), NoSQL and columnar databases, and Bigtable.
- Deep understanding of search engine technology, including Elasticsearch/Solr.
- Experience in Agile development practices such as Scrum.
- Strong problem-solving skills for designing algorithms related to data cleaning, mining, clustering, and pattern recognition.
- Ability to work effectively in a matrix-driven organization under varying circumstances.
- Desirable personal qualities: creativity, tenacity, curiosity, and a passion for technical excellence.

Location: Bangalore

To apply for this position, interested candidates can send their applications to careers@bdb.ai.

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

Indore, Madhya Pradesh

On-site

Golden Eagle IT Technologies Pvt. Ltd. is looking for a skilled Data Engineer with 2 to 4 years of experience to join the team in Indore. The ideal candidate should have a solid background in data engineering, big data technologies, and cloud platforms. As a Data Engineer, you will design, build, and maintain efficient, scalable, and reliable data pipelines.

You will develop and maintain ETL pipelines using tools like Apache Airflow, Spark, and Hadoop, and design and implement data solutions on AWS, leveraging services such as DynamoDB, Athena, Glue Data Catalog, and SageMaker. Working with messaging systems like Kafka for data streaming and real-time processing will also be part of your responsibilities. Proficiency in Python and Scala for data processing, transformation, and automation is essential, as is ensuring data quality and integrity across multiple sources and formats. You will collaborate with data scientists, analysts, and other stakeholders to understand data needs and deliver solutions, optimize and tune data systems for performance and scalability, and implement best practices for data security and compliance.

Preferred skills include experience with infrastructure-as-code tools like Pulumi, familiarity with GraphQL for API development, and exposure to machine learning and data science workflows, particularly using SageMaker. Qualifications for this position include a Bachelor's degree in Computer Science, Information Technology, or a related field; 2-4 years of experience in data engineering or a similar role; proficiency in AWS cloud services and big data technologies; strong programming skills in Python and Scala; knowledge of data warehousing concepts and tools; and excellent problem-solving and communication skills.
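To illustrate the Kafka-based streaming ingestion mentioned above, a minimal producer sketch with the kafka-python client follows; the broker address, topic, and event shape are hypothetical.

```python
# Sketch: publishing change events to Kafka for a real-time ingestion path,
# using the kafka-python client. Broker address and topic are hypothetical.
import json

from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="broker1:9092",               # hypothetical broker
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

event = {"order_id": 1234, "status": "SHIPPED", "ts": "2025-01-15T10:30:00Z"}
producer.send("orders.events", value=event)  # hypothetical topic
producer.flush()  # block until the broker acknowledges the message
```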

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

You are invited to join our team in Chennai as a Talend Developer on a 3-month contract. Your primary responsibility will be designing, developing, and implementing data integration solutions using Talend Data Integration tools. This position is tailored for individuals who excel in a dynamic, project-oriented setting and possess a solid foundation in ETL development.

Your key duties will include crafting and executing scalable ETL processes in Talend Open Studio/Enterprise, merging data from various sources into target systems while ensuring data quality and coherence, and collaborating with Data Architects, Analysts, and fellow developers to understand data requirements and translate them into technical solutions. You will also optimize and fine-tune ETL jobs for performance and reliability, create and maintain technical documentation for ETL processes and data workflows, and troubleshoot and resolve ETL issues and production bugs.

The ideal candidate has a minimum of 3 years of hands-on experience with Talend Data Integration; proficiency in ETL best practices, data modeling, and data warehousing concepts; and a strong command of SQL with experience in relational databases such as Oracle, MySQL, and PostgreSQL. Knowledge of big data technologies like Hadoop, Spark, and Hive is advantageous, as is familiarity with cloud platforms like AWS, Azure, and GCP. Problem-solving skills, the ability to work independently, and excellent communication and teamwork abilities will be critical to your success. This is a contractual/temporary position that requires your presence at the office for the duration of the 3-month contract term.

Posted 1 week ago

Apply

8.0 - 12.0 years

0 Lacs

Pune, Maharashtra, India

On-site

The Applications Development Senior Programmer Analyst is an intermediate-level position responsible for participation in the establishment and implementation of new or revised application systems and programs, in coordination with the Technology team. The overall objective of this role is to contribute to applications systems analysis and programming activities.

Key Responsibilities:
The Senior Big Data Engineer will work very closely with, and manage the work of, a team of data engineers working on our Big Data Platform. The tech lead will need the core skills below:
- Work closely with the Olympus core and product processor teams and drive the build-out and implementation of the CitiDigital reporting using the Olympus framework.
- Be accountable for all phases of the development process - analysis, design, construction, testing and implementation - in agile development lifecycles.
- Perform unit testing and system testing for all applications developed/enhanced and ensure that all critical and high-severity bugs are addressed.
- Act as a Subject Matter Expert (SME) in at least one area of Applications Development.
- Align to Engineering Excellence development principles and standards.
- Promote and increase our development productivity scores for coding.
- Fully adhere to and evangelize a full Continuous Integration and Continuous Deploy pipeline.
- Apply strong SQL skills to extract, analyze and reconcile huge data sets.
- Demonstrate ownership and initiative.
- The project will run in iteration lifecycles with agile practices, so experience of agile development and scrums is highly beneficial.

Qualifications:
- Bachelor's degree/University degree or equivalent experience; Master's degree preferred.
- 8-12 years' experience in application/software development.

Skills:
- Prior work experience in capital/regulatory markets or a related industry.
- Experience with Big Data technologies (Spark, Hadoop, HDFS, Hive, Impala).
- Experience with Python/Scala and Unix shell scripting is a must.
- Excellent analytical, problem solving, negotiating, influencing, facilitation, prioritization, decision-making and conflict resolution skills.
- Solid understanding of the Big Data architecture and the ability to troubleshoot development/performance issues on Hadoop (preferably Cloudera).
- Strong data analysis skills and the ability to slice and dice the data as needed for business reporting.
- Passionate and self-driven, with a can-do attitude; able to build practical solutions.
- Good team player who can work in a global team model and is deadline-oriented.
- Dynamic and flexible with a high energy level, as this is a demanding and rapidly changing environment; able to work independently given general guidance.

Education:
- Bachelor's degree/University degree or equivalent experience.

This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required.
------------------------------------------------------
Job Family Group: Technology
------------------------------------------------------
Job Family: Applications Development
------------------------------------------------------
Time Type: Full time
------------------------------------------------------
Most Relevant Skills: Please see the requirements listed above.
------------------------------------------------------
Other Relevant Skills: For complementary skills, please see above and/or contact the recruiter.
------------------------------------------------------
Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law.

If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.

Posted 1 week ago

Apply

7.0 - 10.0 years

0 Lacs

Bangalore Urban, Karnataka, India

On-site

Minimum Work Experience: 7-10 Years
Role: Data Analyst

Key Responsibilities:
- Core banking domain knowledge covering Customer, KYC, Trade, Payments, loans, deposits, etc.
- Risk management experience covering BASEL, IFRS9, risk frameworks, etc.
- Strong mathematical skills to help collect, measure, organize, and analyze data.
- Knowledge of programming languages like SQL, Oracle, R, MATLAB, and Python.
- Technical proficiency in database design and development, data models, and techniques for data mining and segmentation.
- Experience in handling reporting packages like Business Objects, programming (JavaScript, XML, or ETL frameworks), and databases.
- Proficiency in statistics and statistical packages like Excel, SPSS, and SAS for data set analysis.
- Adept at using data processing platforms like Hadoop and Apache Spark, including open-source Hadoop data analytics.
- Capacity to develop and document procedures and workflows.
- Ability to carry out data quality control, validation, and linkage.
- Ability to produce clear graphical representations and data visualizations.
- Managing and designing the reporting environment, including data sources, security, and metadata.
- Providing technical expertise in data storage structures, data mining, and data cleansing.
- Knowledge of data visualization software like Qlik and Power BI.
- Problem-solving and teamworking skills; accuracy and attention to detail.
- Adept at writing queries and reports and making presentations.
- Proven working experience in data analysis.

Prerequisites:
- Bachelor's degree from an accredited university or college in computer science.
- Work experience as a data analyst or in a related field.
- Ability to work with stakeholders to assess potential risks.
- Ability to analyze existing tools and databases and provide software solution recommendations.
- Ability to translate business requirements into non-technical, lay terms.
- High-level experience in methodologies and processes for managing large-scale databases.
- Demonstrated experience in handling large data sets and relational databases.
- Understanding of addressing and metadata standards.
- High-level written and verbal communication skills.

Posted 1 week ago

Apply

4.0 - 11.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Hello,

Greetings from Quess Corp! Hope you are doing well. We have a job opportunity with one of our clients.

Designation: Data Engineer
Location: Gurugram
Experience: 4 to 11 years
Qualification: Graduate / PG (IT)
Skill Set: Data Engineer, Python, AWS, SQL

Essential capabilities:
- Enthusiasm for technology, keeping up with the latest trends
- Ability to articulate complex technical issues and desired outcomes of system enhancements
- Proven analytical skills and evidence-based decision making
- Excellent problem solving, troubleshooting and documentation skills
- Strong written and verbal communication skills
- Excellent collaboration and interpersonal skills
- Strong delivery focus with an active approach to quality and auditability
- Ability to work under pressure and excel within a fast-paced environment
- Ability to self-manage tasks
- Agile software development practices

Desired experience:
- Hands-on in SQL and its Big Data variants (Hive-QL, Snowflake ANSI, Redshift SQL)
- Python and Spark and one or more of its APIs (PySpark, Spark SQL, Scala), plus Bash/shell scripting
- Experience with source code control - GitHub, VSTS, etc.
- Knowledge of and exposure to Big Data technologies in the Hadoop stack such as HDFS, Hive, Impala, Spark, etc., and cloud Big Data warehouses - Redshift, Snowflake, etc.
- Experience with UNIX command-line tools
- Exposure to AWS technologies including EMR, Glue, Athena, Data Pipeline, Lambda, etc.
- Understanding and ability to translate/physicalise data models (Star Schema, Data Vault 2.0, etc.)

Essential experience:
It is expected that the role holder will most likely have the following qualifications and experience:
- 4-11 years technical experience (within the financial services industry preferred)
- Technical domain experience (subject matter expertise in technology or tools)
- Solid experience, knowledge and skills in Data Engineering and BI/software development, such as ELT/ETL and data extraction and manipulation in Data Lake/Data Warehouse/Lakehouse environments
- Hands-on programming experience writing Python, SQL, Unix shell scripts and PySpark scripts in a complex enterprise environment
- Experience in configuration management using Ansible/Jenkins/GIT
- Hands-on cloud-based solution design, configuration and development experience with Azure and AWS
- Hands-on experience of using AWS services - S3, EC2, EMR, SNS, SQS, Lambda functions, Redshift
- Hands-on experience of building data pipelines to ingest and transform data on the Databricks Delta Lake platform from a range of data sources - databases, flat files, streaming, etc.
- Knowledge of data modelling techniques and practices used for a Data Warehouse/Data Mart application
- Quality engineering development experience (CI/CD - Jenkins, Docker)
- Experience in Terraform, Kubernetes and Docker
- Experience with source control tools - GitHub or BitBucket
- Exposure to relational databases - Oracle or MS SQL or DB2 (SQL/PLSQL, database design, normalisation, execution plan analysis, index creation and maintenance, stored procedures), Postgres/MySQL
- Skilled in querying data from a range of data sources that store structured and unstructured data
- Knowledge or understanding of Power BI (recommended)

Key accountabilities:
- Design, develop, test, deploy, maintain and improve software
- Develop flowcharts, layouts and documentation to identify requirements and solutions
- Write well designed and high-quality testable code
- Produce specifications and determine operational feasibility
- Integrate software components into a fully functional platform
- Apply proactively and perform hands-on design and implementation of best practice CI/CD
- Coach and mentor other Service Team members
- Develop/contribute to software verification plans and quality assurance procedures
- Document and maintain software functionality
- Troubleshoot, debug and upgrade existing systems, including participating in DR tests
- Deploy programs and evaluate customer feedback
- Contribute to team estimation for delivery and expectation management for scope
- Comply with industry standards and regulatory requirements
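As a sketch of the Databricks Delta Lake ingestion pattern listed above, the following PySpark snippet upserts a landed batch into a Delta table with MERGE; the S3 paths and merge key are hypothetical.

```python
# Sketch: land raw files and upsert them into a Delta table. Paths and the
# merge key are hypothetical; on Databricks, `spark` is already provided.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

updates = spark.read.parquet("s3://landing-zone/customers/")  # hypothetical

target = DeltaTable.forPath(spark, "s3://lakehouse/silver/customers")

# MERGE gives idempotent upserts, the usual Delta Lake ingestion pattern.
(target.alias("t")
       .merge(updates.alias("u"), "t.customer_id = u.customer_id")
       .whenMatchedUpdateAll()
       .whenNotMatchedInsertAll()
       .execute())
```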

Posted 1 week ago

Apply

2.0 - 6.0 years

8 - 12 Lacs

Gurugram

Work from Office

At American Express, our culture is built on a 175-year history of innovation, shared values and Leadership Behaviors, and an unwavering commitment to back our customers, communities, and colleagues. As part of Team Amex, you'll experience this powerful backing with comprehensive support for your holistic well-being and many opportunities to learn new skills, develop as a leader, and grow your career. Here, your voice and ideas matter, your work makes an impact, and together, you will help us define the future of American Express. Join Team Amex and let's lead the way together.

From building next-generation apps and microservices in Kotlin to using AI to help protect our franchise and customers from fraud, you could be doing entrepreneurial work that brings our iconic, global brand into the future. As a part of our tech team, we could work together to bring ground-breaking and diverse ideas to life that power our digital systems, services, products and platforms. If you love to work with APIs, contribute to open source, or use the latest technologies, we'll support you with an open environment and learning culture.

Function Description:
American Express is looking for energetic, successful and highly skilled Engineers to help shape our technology and product roadmap. Our Software Engineers not only understand how technology works, but how that technology intersects with the people who count on it every day. Today, innovative ideas, insight and new points of view are at the core of how we create a more powerful, personal and fulfilling experience for our customers and colleagues, with batch/real-time analytical solutions using ground-breaking technologies to deliver innovative solutions across multiple business units.

This Engineering role is based in our Global Risk and Compliance Technology organization and will have a keen focus on platform modernization, bringing to life the latest technology stacks to support the ongoing needs of the business as well as compliance against global regulatory requirements.

Qualifications:
Support the Compliance and Operations Risk data delivery team in India to lead and assist in the design and actual development of applications. You will be responsible for specific functional areas within the team; this involves project management and taking business specifications. The individual should be able to independently run projects/tasks delegated to them.

Technology Skills:
- Bachelor's degree in Engineering or Computer Science or equivalent; 2 to 5 years' experience is required
- GCP professional certification - Data Engineer
- Expert in the Google BigQuery tool for data warehousing needs
- Experience with Big Data (Spark Core and Hive) preferred
- Familiar with GCP offerings; experience building data pipelines on GCP a plus
- Knowledge of Hadoop architecture (Hadoop, MapReduce, HBase) and UNIX shell scripting experience is good to have
- Creative problem solving (innovative)

We back you with benefits that support your holistic well-being so you can be and deliver your best. This means caring for you and your loved ones' physical, financial, and mental health, as well as providing the flexibility you need to thrive personally and professionally:
- Competitive base salaries
- Bonus incentives
- Support for financial well-being and retirement
- Comprehensive medical, dental, vision, life insurance, and disability benefits (depending on location)
- Flexible working model with hybrid, onsite or virtual arrangements depending on role and business need
- Generous paid parental leave policies (depending on your location)
- Free access to global on-site wellness centers staffed with nurses and doctors (depending on location)
- Free and confidential counseling support through our Healthy Minds program
- Career development and training opportunities

American Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, disability status, age, or any other status protected by law. Offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.
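Returning to the technology skills listed above (BigQuery on GCP), here is a minimal sketch with the google-cloud-bigquery client running a parameterized query; the project, dataset, table, and column names are hypothetical.

```python
# Sketch: a parameterized BigQuery query, in line with the GCP/BigQuery focus
# above. Project, dataset, table and column names are hypothetical.
import datetime

from google.cloud import bigquery

client = bigquery.Client(project="my-risk-project")  # hypothetical project

job = client.query(
    """
    SELECT account_id, SUM(exposure) AS total_exposure
    FROM `my-risk-project.compliance.positions`
    WHERE snapshot_date = @as_of
    GROUP BY account_id
    """,
    job_config=bigquery.QueryJobConfig(
        query_parameters=[
            bigquery.ScalarQueryParameter("as_of", "DATE", datetime.date(2025, 1, 15)),
        ]
    ),
)
for row in job.result():  # blocks until the query job completes
    print(row.account_id, row.total_exposure)
```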

Posted 1 week ago

Apply

12.0 - 17.0 years

37 - 40 Lacs

Hyderabad, Gurugram, Bengaluru

Work from Office

Minimum 12+ years of relevant experience in building software applications in the data and analytics field.
- Enhance the go-to-market strategy by designing new and relevant solution frameworks to accelerate our clients' journeys toward impacting patient outcomes.
- Pitch for these opportunities and craft winning proposals to grow the Data Science Practice.
- Build and lead a team of data scientists and analysts, fostering a collaborative and innovative environment.
- Oversee the design and delivery of models, ensuring projects are completed on time and meet business objectives.
- Engage in consultative selling with clients to grow/deliver business.
- Develop and operationalize scalable processes to deliver on large and complex client engagements.
- Extensive hands-on experience with Python, R, or Julia, focusing on data science and generative AI frameworks.
- Expertise in working with generative models such as GPT, DALL-E, Stable Diffusion, Codex, and MidJourney for various applications.
- Proficiency in fine-tuning and deploying generative models using libraries like Hugging Face Transformers, Diffusers, or PyTorch Lightning.
- Strong understanding of generative techniques, including GANs, VAEs, diffusion models, and autoregressive models.
- Experience in prompt engineering, zero-shot, and few-shot learning for optimizing generative AI outputs across different use cases.
- Expertise in managing generative AI data pipelines, including preprocessing large-scale multimodal datasets for text, image, or code generation.

Location: Gurugram, Bengaluru, Hyderabad, Pune, Noida, India
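To ground the generative-AI tooling named above, a minimal sketch of a Hugging Face Transformers text-generation pipeline with a few-shot-style prompt follows; the model choice (a small open model) and the prompt are illustrative assumptions.

```python
# Sketch: a Transformers text-generation pipeline with a prompt-engineered
# instruction. The model and prompt are assumptions for illustration.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # small open model

prompt = (
    "Summarize the clinical note in one sentence.\n"
    "Note: Patient reports mild headache, no fever.\n"
    "Summary:"
)
out = generator(prompt, max_new_tokens=30, do_sample=False)
print(out[0]["generated_text"])
```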

Posted 1 week ago

Apply

2.0 - 6.0 years

9 - 13 Lacs

Pune

Work from Office

Join us as a Dev Lead at Barclays, where you'll take part in the evolution of our digital landscape, driving innovation and excellence. You'll harness cutting-edge technology to revolutionise our digital offerings, ensuring unparalleled customer experiences. As part of the team, you will deliver the technology stack, using strong analytical and problem-solving skills to understand the business requirements and deliver quality solutions. You'll be working on complex technical problems requiring detailed analysis, in conjunction with fellow engineers, business analysts and business stakeholders.

To be successful as a Dev Lead you should have experience with:
- Creating standards and guidelines for ETL development.
- Designing effective reusable components to improve development productivity.
- Optimising ETLs to reduce overall execution time.
- Understanding the functionality of the requirements and designing data models and ETLs.
- Developing and managing technical lineage in Metadata Hub.
- Complex hands-on work on Ab Initio, Spark, Scala and HDFS.
- Troubleshooting and development in other Ab Initio products such as Continuous Flows, Express It, Query It, Metadata Hub and Control Center.
- Troubleshooting related SQL and NoSQL databases such as Teradata, Oracle and MongoDB.
- Writing complex SQL for data processing and data analysis.
- Understanding Hadoop, Spark and Scala concepts and tuning Ab Initio products accordingly.
- Troubleshooting any ETL-related problem or error.
- Providing expertise and continuity across both the business solution and the technical solution involving Ab Initio.
- Building effective relationships with all stakeholders, including federated project teams, BU SPOCs, GIS, etc.
- Excellent problem-solving skills and providing innovative solutions to technical problems.
- Accountability, decision making and problem solving.
- Working with the project development team in Pune and escalating any technical issues, roadblocks or project risks to the Programme Delivery Lead.
- Developing the initial framework or platform of the solution which each developer will use to build the solution.
- Conducting code reviews with other developers within the development team.
- Working with the technology teams and taking responsibility for the timely implementation of the solution.
- Writing technical design documents adhering to predefined standards.
- Tuning existing and new workflows to ensure acceptable application performance.

Some other highly valued skills include:
- Awareness of, or hands-on experience with, cloud technologies.
- Understanding of Big Data technologies: Cloudera, Hadoop, Spark.
- Awareness of various testing and analytical methodologies and tools.
- Ability to solve problems with data and to understand its relevance to the wider objective.
- Banking knowledge preferred.

You may be assessed on key critical skills relevant for success in the role, such as risk and controls, change and transformation, business acumen, strategic thinking, and digital and technology, as well as job-specific technical skills. This role is based in Pune.

Purpose of the role: to design, develop and improve software, utilising various engineering methodologies, that provides business, platform and technology capabilities for our customers and colleagues.

Accountabilities:
- Development and delivery of high-quality software solutions using industry-aligned programming languages, frameworks and tools, ensuring that code is scalable, maintainable and optimised for performance.
- Cross-functional collaboration with product managers, designers and other engineers to define software requirements, devise solution strategies, and ensure seamless integration and alignment with business objectives.
- Collaboration with peers, participation in code reviews, and promotion of a culture of code quality and knowledge sharing.
- Staying informed of industry technology trends and innovations, and actively contributing to the organisation's technology communities to foster a culture of technical excellence and growth.
- Adherence to secure coding practices to mitigate vulnerabilities, protect sensitive data and ensure secure software solutions.
- Implementation of effective unit testing practices to ensure proper code design, readability and reliability.

Assistant Vice President expectations: to advise and influence decision making, contribute to policy development and take responsibility for operational effectiveness, collaborating closely with other functions and business divisions. Lead a team performing complex tasks, using well-developed professional knowledge and skills to deliver work that impacts the whole business function. Set objectives and coach employees in pursuit of those objectives, appraise performance relative to objectives, and determine reward outcomes. If the position has leadership responsibilities, People Leaders are expected to demonstrate a clear set of leadership behaviours to create an environment for colleagues to thrive and deliver to a consistently excellent standard. The four LEAD behaviours are: Listen and be authentic, Energise and inspire, Align across the enterprise, Develop others. An individual contributor will instead lead collaborative assignments, guide team members through structured assignments, and identify the need to include other areas of specialisation to complete assignments. They will identify new directions for assignments and/or projects, identifying a combination of cross-functional methodologies or practices to meet required outcomes; consult on complex issues, providing advice to People Leaders to support the resolution of escalated issues; identify ways to mitigate risk and develop new policies and procedures in support of the control and governance agenda; and take ownership of managing risk and strengthening controls in relation to the work done. They will perform work closely related to that of other areas, which requires understanding of how areas coordinate and contribute to the achievement of the objectives of the organisation's sub-function, and collaborate with other areas of work, for business-aligned support areas, to keep up to speed with business activity and business strategy. They will engage in complex analysis of data from multiple internal and external sources, such as procedures and practices in other areas, teams and companies, to solve problems creatively and effectively; communicate complex information, which could include sensitive information or information that is difficult to communicate because of its content or audience; and influence or convince stakeholders to achieve outcomes.

All colleagues will be expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence and Stewardship, our moral compass, helping us do what we believe is right. They will also be expected to demonstrate the Barclays Mindset to Empower, Challenge and Drive, the operating manual for how we behave.
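To make the "optimise ETLs to reduce execution time" requirement concrete, here is a minimal PySpark sketch of one common tuning pattern: broadcasting a small dimension table to avoid a shuffle join, plus filtering early for partition pruning. All paths, table names, and config values are illustrative assumptions, not details from the posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.functions import broadcast

spark = (
    SparkSession.builder
    .appName("etl-tuning-sketch")
    # Illustrative tuning knobs; real values depend on cluster and data size.
    .config("spark.sql.shuffle.partitions", "200")
    .config("spark.sql.adaptive.enabled", "true")
    .getOrCreate()
)

# Hypothetical inputs: a large fact table and a small dimension table.
transactions = spark.read.parquet("hdfs:///data/transactions")
branches = spark.read.parquet("hdfs:///data/branches")

daily_totals = (
    transactions
    # Filtering on the partition column first prunes whole partitions.
    .filter(F.col("txn_date") == "2025-07-27")
    # Broadcasting the small table replaces a shuffle join with a map-side join.
    .join(broadcast(branches), on="branch_id")
    .groupBy("branch_id", "branch_name")
    .agg(F.sum("amount").alias("total_amount"))
)

daily_totals.write.mode("overwrite").parquet("hdfs:///out/daily_totals")
```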

Posted 1 week ago

Apply

5.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Title: Big Data Engineer (AWS-Scala Specialist)
Location: Greater Noida/Hyderabad
Experience: 5-10 Years

About the Role: We are seeking a highly skilled Senior Big Data Engineer with deep expertise in Big Data technologies and AWS cloud services. The ideal candidate will bring strong hands-on experience in designing, architecting, and implementing scalable data engineering solutions while driving innovation within the team.

Key Responsibilities:
- Design, develop, and optimize Big Data architectures leveraging AWS services for large-scale, complex data processing.
- Build and maintain data pipelines using Spark (Scala) for both structured and unstructured datasets.
- Architect and operationalize data engineering and analytics platforms (AWS preferred; Hortonworks, Cloudera, or MapR experience a plus).
- Implement and manage AWS services including EMR, Glue, Kinesis, DynamoDB, Athena, CloudFormation, API Gateway, and S3.
- Work on real-time streaming solutions using Kafka and AWS Kinesis.
- Support ML model operationalization on AWS (deployment, scheduling, and monitoring).
- Analyze source system data and data flows to ensure high-quality, reliable data delivery for business needs.
- Write highly efficient SQL queries and support data warehouse initiatives using Apache NiFi, Airflow, and Kylo.
- Collaborate with cross-functional teams to provide technical leadership, mentor team members, and strengthen the data engineering capability.
- Troubleshoot and resolve complex technical issues, ensuring scalability, performance, and security of data solutions.

Mandatory Skills & Qualifications:
✅ 5+ years of solid hands-on experience in Big Data technologies (AWS, Scala, Hadoop, and Spark mandatory)
✅ Proven expertise in Spark with Scala
✅ Hands-on experience with AWS services (EMR, Glue, Lambda, S3, CloudFormation, API Gateway, Athena, Lake Formation)

Share your resume at Aarushi.Shukla@coforge.com if you have experience with the mandatory skills and are an early joiner.
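As one illustration of the real-time streaming work this posting mentions, here is a minimal Kinesis producer sketch using boto3 (shown in Python for brevity, though the role itself mandates Scala). The stream name, region, and event shape are assumptions, not details from the posting.

```python
import json
import boto3

# Hypothetical region and stream; replace with real values.
kinesis = boto3.client("kinesis", region_name="ap-south-1")

def publish_event(event: dict, stream_name: str = "clickstream-events") -> None:
    """Send one JSON event to a Kinesis data stream."""
    kinesis.put_record(
        StreamName=stream_name,
        Data=json.dumps(event).encode("utf-8"),
        # The partition key controls shard routing; keying on user_id keeps
        # one user's events ordered within a shard.
        PartitionKey=str(event["user_id"]),
    )

publish_event({"user_id": 42, "action": "page_view", "page": "/home"})
```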

Posted 1 week ago

Apply

2.0 - 7.0 years

30 - 35 Lacs

Bengaluru

Work from Office

Senior Data Engineer
Date: 27 Jul 2025 | Location: Bangalore, IN | Company: kmartaustr

A place you can belong: We celebrate the rich diversity of the communities in which we operate and are committed to creating inclusive and safe environments where all our team members can contribute and succeed. We believe that all team members should feel valued, respected, and safe irrespective of gender, ethnicity, indigeneity, religious beliefs, education, age, disability, family responsibilities, sexual orientation, and gender identity, and we encourage applications from all candidates.

Job Description:
- 5-7 years of experience as a Data Engineer.
- 3+ years with AWS services such as IAM, API Gateway, EC2, and S3.
- 2+ years creating and deploying containers on Kubernetes.
- 2+ years with CI/CD pipelines such as Jenkins and GitHub.
- 2+ years with Snowflake data warehousing.
- 5-7 years with the ETL/ELT paradigm.
- 5-7 years with Big Data technologies such as Spark and Kafka.
- Strong skills in Python, Java, or Scala.
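The ELT paradigm this posting asks for typically means landing raw files in the warehouse first and transforming them there. Below is a minimal sketch using the Snowflake Python connector; the account, credentials, stage, and table names are all placeholders, not details from the posting.

```python
import snowflake.connector

# All connection parameters are hypothetical placeholders.
conn = snowflake.connector.connect(
    account="xy12345.ap-southeast-2",
    user="ETL_USER",
    password="***",
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="STAGING",
)

try:
    cur = conn.cursor()
    # Extract-Load: copy raw files from a named stage into a staging table.
    cur.execute("""
        COPY INTO staging.orders
        FROM @raw_stage/orders/
        FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1)
    """)
    # Transform: aggregate inside the warehouse rather than before loading.
    cur.execute("""
        INSERT INTO analytics.daily_orders
        SELECT order_date, COUNT(*) AS order_count
        FROM staging.orders
        GROUP BY order_date
    """)
finally:
    conn.close()
```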

Posted 1 week ago

Apply

2.0 - 6.0 years

3 - 7 Lacs

Bengaluru

Work from Office

Diverse Lynx is looking for a PySpark Developer to join our dynamic team and embark on a rewarding career journey:
- Designing and developing big data applications using the PySpark framework to meet the needs of the business.
- Writing and optimizing Spark SQL statements to extract and manipulate large datasets.
- Developing and deploying Spark algorithms to perform data processing and analytics tasks, such as machine learning and graph processing.
- Debugging and troubleshooting Spark code to resolve issues and improve application performance.
- Collaborating with cross-functional teams, such as data engineers and data analysts, to ensure that PySpark applications are integrated with other systems.
- Creating and maintaining documentation so that the big data architecture, design, and functionality are well understood by others.

Candidates should be detail-oriented, with excellent problem-solving and communication skills.
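A minimal sketch of the "writing and optimizing Spark SQL statements" work described above: register a DataFrame as a temporary view, then query it with SQL. The input path, schema, and query are illustrative assumptions only.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("pyspark-sql-sketch").getOrCreate()

# Hypothetical input: newline-delimited JSON events; adjust path and schema.
events = spark.read.json("events/")
events.createOrReplaceTempView("events")

# Spark SQL for extraction and manipulation of a large dataset.
top_pages = spark.sql("""
    SELECT page, COUNT(*) AS views
    FROM events
    WHERE event_type = 'page_view'
    GROUP BY page
    ORDER BY views DESC
    LIMIT 10
""")
top_pages.show()
```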

Posted 1 week ago

Apply