8.0 - 13.0 years
12 - 16 Lacs
Gurugram
Remote
Summary: The Data Engineer is responsible for managing and operating Tableau, Tableau Bridge server, Databricks, dbt, SSRS, SSIS, AWS DWS, AWS AppFlow, and Power BI. The engineer will work closely with the customer and team to manage and operate the cloud data platform.

Job Description:
- Provides Level 3 operational coverage: troubleshooting incidents/problems, including collecting logs, cross-checking against known issues, and investigating common root causes (for example, failed batches and infrastructure items such as connectivity to source or network issues)
- Knowledge Management: create/update runbooks as needed
- Entitlements/Governance: watch all configuration changes to batches and infrastructure (cloud platform), mapping them to proper documentation and aligning resources
- Communication: lead and act as a POC for the customer from off-site, handling communication, escalation, and issue isolation, and coordinating with off-site resources while level-setting expectations across stakeholders
- Change Management: align resources for on-demand changes and coordinate with stakeholders as required
- Request Management: handle user requests; if a request is not runbook-based, create a new KB article or update the runbook accordingly
- Incident and Problem Management: root cause analysis, plus preventive measures and recommendations such as enhanced monitoring or systematic changes as needed

Skills:
- Good hands-on experience with Tableau, Tableau Bridge server, Databricks, dbt, SSRS, SSIS, AWS DWS, AWS AppFlow, and Power BI
- Ability to read and write SQL and stored procedures
- Good hands-on experience in configuring, managing, and troubleshooting, along with general analytical and problem-solving skills
- Excellent written and verbal communication skills; ability to communicate technical information and ideas so others will understand
- Ability to successfully work in and promote inclusiveness in small groups

Job Complexity: This role requires extensive problem-solving skills and the ability to research an issue, determine the root cause, and implement the resolution; research of sources such as Databricks/AWS/Tableau documentation may be required to identify and resolve issues. Must be able to prioritize issues and multi-task.

Experience/Education: Requires a Bachelor's degree in computer science or a related field plus 8+ years of hands-on experience configuring and managing AWS, Tableau, and Databricks solutions. Experience with Databricks and Tableau environments is desired.
Posted 1 week ago
7.0 - 12.0 years
14 - 18 Lacs
Gurugram, Bengaluru
Work from Office
Summary: The Data Engineer is responsible for managing and operating Databricks, dbt, SSRS, SSIS, AWS DWS, AWS AppFlow, and Power BI/Tableau. The engineer will work closely with the customer and team to manage and operate the cloud data platform.

Job Description:
- Leads Level 4 operational coverage: resolving pipeline issues, proactive monitoring of sensitive batches, RCA and retrospection of issues, and documenting defects
- Design, build, test, and deploy fixes to the non-production environment for customer testing; work with the customer to deploy fixes to production upon receiving customer acceptance of the fix
- Cost/performance optimization and audit/security, including any associated infrastructure changes
- Troubleshooting incidents/problems, including collecting logs, cross-checking against known issues, and investigating common root causes (for example, failed batches and infrastructure items such as connectivity to source or network issues)
- Knowledge Management: create/update runbooks as needed
- Governance: watch all configuration changes to batches and infrastructure (cloud platform), mapping them to proper documentation and aligning resources
- Communication: lead and act as a POC for the customer from off-site, handling communication, escalation, and issue isolation, and coordinating with off-site resources while level-setting expectations across stakeholders
- Change Management: align resources for on-demand changes and coordinate with stakeholders as required
- Request Management: handle user requests; if a request is not runbook-based, create a new KB article or update the runbook accordingly
- Incident and Problem Management: root cause analysis, plus preventive measures and recommendations such as enhanced monitoring or systematic changes as needed

Skills:
- Good hands-on experience with Databricks, dbt, SSRS, SSIS, AWS DWS, AWS AppFlow, and Power BI/Tableau
- Ability to read and write SQL and stored procedures
- Good hands-on experience in configuring, managing, and troubleshooting, along with general analytical and problem-solving skills
- Excellent written and verbal communication skills; ability to communicate technical information and ideas so others will understand
- Ability to successfully work in and promote inclusiveness in small groups

Experience/Education: Requires a Bachelor's degree in computer science or a related field plus 10+ years of hands-on experience configuring and managing AWS, Tableau, and Databricks solutions. Experience with Databricks and Tableau environments is desired.

Job Complexity: This role requires extensive problem-solving skills and the ability to research an issue, determine the root cause, and implement the resolution; research of sources such as Databricks/AWS/Tableau documentation may be required to identify and resolve issues. Must be able to prioritize issues and multi-task.

Work Location: Remote (work from home)
Shift: Rotational shifts (24/7)
Posted 1 week ago
4.0 - 7.0 years
25 - 30 Lacs
Ahmedabad
Work from Office
ManekTech is looking for a Data Engineer to join our dynamic team and embark on a rewarding career journey.
- Liaising with coworkers and clients to elucidate the requirements for each task
- Conceptualizing and generating infrastructure that allows big data to be accessed and analyzed
- Reformulating existing frameworks to optimize their functioning
- Testing such structures to ensure that they are fit for use
- Preparing raw data for manipulation by data scientists
- Detecting and correcting errors in your work
- Ensuring that your work remains backed up and readily accessible to relevant coworkers
- Remaining up to date with industry standards and technological advancements that will improve the quality of your outputs
Posted 1 week ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Company Description
About Sutherland: Artificial intelligence. Automation. Cloud engineering. Advanced analytics. For business leaders, these are key factors of success. For us, they’re our core expertise. We work with iconic brands worldwide, bringing them a unique value proposition through market-leading technology and business process excellence. We’ve created over 200 unique inventions under several patents across AI and other critical technologies. Leveraging our advanced products and platforms, we drive digital transformation, optimize critical business operations, reinvent experiences, and pioneer new solutions, all provided through a seamless “as a service” model. For each company, we provide new keys for their businesses, the people they work with, and the customers they serve. We tailor proven and rapid formulas to fit their unique DNA. We bring together human expertise and artificial intelligence to develop digital chemistry. This unlocks new possibilities, transformative outcomes, and enduring relationships.

Sutherland: Unlocking digital performance. Delivering measurable results.

Job Description
Sutherland is seeking an attentive and technical person to join us as a Sr. Developer – SQL & ETL. The person should have hands-on experience with SQL and ETL techniques and frameworks. We are a group of dynamic and driven individuals. If you are looking to build a fulfilling career and are confident you have the skills and experience to help us succeed, we want to work with you!
Responsibilities:
- Keep management updated: update management on the status of work
- Impact the bottom line: produce solid and effective strategies based on accurate and meaningful data reports and analysis and/or keen observations
- Define Sutherland’s reputation: oversee and manage performance and service quality to guarantee customer satisfaction
- Strengthen relationships: establish and maintain communication with clients and/or team members; understand needs, resolve issues, and meet expectations
- Take the lead: debug production issues quickly and provide root cause analysis with a resolution

Qualifications: To succeed in this position, you must have:
- This technical position is responsible for data migration, database development, and ongoing support of an enterprise data warehouse, data marts, and supporting systems/applications
- Hands-on experience with Microsoft SQL Server and SQL Server Integration Services (SSIS)
- Very good experience in data analysis, data mapping, database design, data migrations, and SQL performance tuning
- Experience implementing ETL processes with multiple source systems (SQL, files, mail, etc.) using SQL Server Data Tools
- Efficiency in designing and developing database objects such as tables, stored procedures, functions, indexes, cursors, analytical functions, SQL scripts, and packages
- Experience monitoring performance and advising on any necessary infrastructure changes
- Ability to fine-tune SQL applications for high performance and throughput
- Support for QA (resolving issues and releasing fixes), UAT, and production support when required
- Deployment and integration testing of developed components in development and test environments

Years of Experience: 6+
Educational Qualification: B.E / B.Tech / M.C.A

Additional Information
Additional responsibilities:
- Knowledge of the Hadoop ecosystem and its components: Hive, Impala, Sqoop, Oozie, etc.
- Basic Linux commands
- Knowledge of scripting languages like Python or shell scripting
- Experience with integration of data from multiple data sources
- Analytical and problem-solving skills
- Strong verbal and written communication skills; able to communicate in a clear, constructive, and professional manner
- US healthcare knowledge
- Reporting, data modelling, and data warehousing
- Any ETL tools apart from SSIS
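The staging-to-warehouse merge at the heart of the ETL work this role describes can be sketched in a few lines. This is an illustration only: Python's built-in sqlite3 stands in for SQL Server (where the same step would typically be a T-SQL MERGE or an SSIS package), and the table names (stg_members, dim_member) are invented.

```python
import sqlite3

# Stand-in sketch: SQLite in place of SQL Server; hypothetical tables.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE stg_members (member_id INTEGER, name TEXT, plan TEXT);
CREATE TABLE dim_member  (member_id INTEGER PRIMARY KEY, name TEXT, plan TEXT);
INSERT INTO dim_member VALUES (1, 'Asha', 'BRONZE');          -- existing row
INSERT INTO stg_members VALUES (1, 'Asha', 'GOLD'), (2, 'Ravi', 'SILVER');
""")

# Core merge step: new staging keys are inserted, existing keys updated.
cur.execute("""
INSERT INTO dim_member (member_id, name, plan)
SELECT member_id, name, plan FROM stg_members WHERE member_id IS NOT NULL
ON CONFLICT(member_id) DO UPDATE SET name = excluded.name, plan = excluded.plan
""")
conn.commit()
print(cur.execute("SELECT member_id, plan FROM dim_member ORDER BY member_id").fetchall())
# → [(1, 'GOLD'), (2, 'SILVER')]
```

Member 1's plan is updated in place while member 2 is inserted, which is the insert-or-update behavior a warehouse load needs to stay idempotent across reruns.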
Posted 1 week ago
8.0 - 11.0 years
35 - 37 Lacs
Kolkata, Ahmedabad, Bengaluru
Work from Office
Dear Candidate,

We are looking for a Big Data Developer to build and maintain scalable data processing systems. The ideal candidate will have experience handling large datasets and working with distributed computing frameworks.

Key Responsibilities:
- Design and develop data pipelines using Hadoop, Spark, or Flink
- Optimize big data applications for performance and reliability
- Integrate various structured and unstructured data sources
- Work with data scientists and analysts to prepare datasets
- Ensure data quality, security, and lineage across platforms

Required Skills & Qualifications:
- Experience with the Hadoop ecosystem (HDFS, Hive, Pig) and Apache Spark
- Proficiency in Java, Scala, or Python
- Familiarity with data ingestion tools (Kafka, Sqoop, NiFi)
- Strong understanding of distributed computing principles
- Knowledge of cloud-based big data services (e.g., EMR, Dataproc, HDInsight)

Note: If interested, please share your updated resume and a preferred time for a discussion. If shortlisted, our HR team will contact you.

Kandi Srinivasa
Delivery Manager
Integra Technologies
Posted 1 week ago
7.0 - 9.0 years
8 - 14 Lacs
Visakhapatnam
Work from Office
Job Summary: We are looking for a seasoned Tech Anchor with deep expertise in Big Data technologies and Python to lead technical design, development, and mentoring across data-driven projects. This role demands a strong grasp of scalable data architecture, strong problem-solving capabilities, and hands-on experience with distributed systems and modern data frameworks.

Key Responsibilities:
- Provide technical leadership across Big Data and Python-based projects
- Architect, design, and implement scalable data pipelines and processing systems
- Guide teams on best practices in data modeling, ETL/ELT development, and performance optimization
- Collaborate with data scientists, analysts, and stakeholders to ensure effective data solutions
- Conduct code reviews and mentor junior engineers to improve code quality and skills
- Evaluate and implement new tools and frameworks to enhance data capabilities
- Troubleshoot complex data-related issues and support production deployments
- Ensure compliance with data security and governance standards
Posted 1 week ago
7.0 - 9.0 years
8 - 14 Lacs
Surat
Work from Office
Job Summary: We are looking for a seasoned Tech Anchor with deep expertise in Big Data technologies and Python to lead technical design, development, and mentoring across data-driven projects. This role demands a strong grasp of scalable data architecture, strong problem-solving capabilities, and hands-on experience with distributed systems and modern data frameworks.

Key Responsibilities:
- Provide technical leadership across Big Data and Python-based projects
- Architect, design, and implement scalable data pipelines and processing systems
- Guide teams on best practices in data modeling, ETL/ELT development, and performance optimization
- Collaborate with data scientists, analysts, and stakeholders to ensure effective data solutions
- Conduct code reviews and mentor junior engineers to improve code quality and skills
- Evaluate and implement new tools and frameworks to enhance data capabilities
- Troubleshoot complex data-related issues and support production deployments
- Ensure compliance with data security and governance standards
Posted 1 week ago
7.0 - 9.0 years
8 - 14 Lacs
Varanasi
Work from Office
Job Summary: We are looking for a seasoned Tech Anchor with deep expertise in Big Data technologies and Python to lead technical design, development, and mentoring across data-driven projects. This role demands a strong grasp of scalable data architecture, strong problem-solving capabilities, and hands-on experience with distributed systems and modern data frameworks.

Key Responsibilities:
- Provide technical leadership across Big Data and Python-based projects
- Architect, design, and implement scalable data pipelines and processing systems
- Guide teams on best practices in data modeling, ETL/ELT development, and performance optimization
- Collaborate with data scientists, analysts, and stakeholders to ensure effective data solutions
- Conduct code reviews and mentor junior engineers to improve code quality and skills
- Evaluate and implement new tools and frameworks to enhance data capabilities
- Troubleshoot complex data-related issues and support production deployments
- Ensure compliance with data security and governance standards
Posted 1 week ago
7.0 - 9.0 years
8 - 14 Lacs
Ludhiana
Work from Office
Job Summary: We are looking for a seasoned Tech Anchor with deep expertise in Big Data technologies and Python to lead technical design, development, and mentoring across data-driven projects. This role demands a strong grasp of scalable data architecture, strong problem-solving capabilities, and hands-on experience with distributed systems and modern data frameworks.

Key Responsibilities:
- Provide technical leadership across Big Data and Python-based projects
- Architect, design, and implement scalable data pipelines and processing systems
- Guide teams on best practices in data modeling, ETL/ELT development, and performance optimization
- Collaborate with data scientists, analysts, and stakeholders to ensure effective data solutions
- Conduct code reviews and mentor junior engineers to improve code quality and skills
- Evaluate and implement new tools and frameworks to enhance data capabilities
- Troubleshoot complex data-related issues and support production deployments
- Ensure compliance with data security and governance standards
Posted 1 week ago
7.0 - 9.0 years
8 - 14 Lacs
Lucknow
Work from Office
Job Summary: We are looking for a seasoned Tech Anchor with deep expertise in Big Data technologies and Python to lead technical design, development, and mentoring across data-driven projects. This role demands a strong grasp of scalable data architecture, strong problem-solving capabilities, and hands-on experience with distributed systems and modern data frameworks.

Key Responsibilities:
- Provide technical leadership across Big Data and Python-based projects
- Architect, design, and implement scalable data pipelines and processing systems
- Guide teams on best practices in data modeling, ETL/ELT development, and performance optimization
- Collaborate with data scientists, analysts, and stakeholders to ensure effective data solutions
- Conduct code reviews and mentor junior engineers to improve code quality and skills
- Evaluate and implement new tools and frameworks to enhance data capabilities
- Troubleshoot complex data-related issues and support production deployments
- Ensure compliance with data security and governance standards
Posted 1 week ago
7.0 - 9.0 years
8 - 14 Lacs
Agra
Work from Office
Job Summary: We are looking for a seasoned Tech Anchor with deep expertise in Big Data technologies and Python to lead technical design, development, and mentoring across data-driven projects. This role demands a strong grasp of scalable data architecture, strong problem-solving capabilities, and hands-on experience with distributed systems and modern data frameworks.

Key Responsibilities:
- Provide technical leadership across Big Data and Python-based projects
- Architect, design, and implement scalable data pipelines and processing systems
- Guide teams on best practices in data modeling, ETL/ELT development, and performance optimization
- Collaborate with data scientists, analysts, and stakeholders to ensure effective data solutions
- Conduct code reviews and mentor junior engineers to improve code quality and skills
- Evaluate and implement new tools and frameworks to enhance data capabilities
- Troubleshoot complex data-related issues and support production deployments
- Ensure compliance with data security and governance standards
Posted 1 week ago
7.0 - 9.0 years
8 - 14 Lacs
Vadodara
Work from Office
Job Summary: We are looking for a seasoned Tech Anchor with deep expertise in Big Data technologies and Python to lead technical design, development, and mentoring across data-driven projects. This role demands a strong grasp of scalable data architecture, strong problem-solving capabilities, and hands-on experience with distributed systems and modern data frameworks.

Key Responsibilities:
- Provide technical leadership across Big Data and Python-based projects
- Architect, design, and implement scalable data pipelines and processing systems
- Guide teams on best practices in data modeling, ETL/ELT development, and performance optimization
- Collaborate with data scientists, analysts, and stakeholders to ensure effective data solutions
- Conduct code reviews and mentor junior engineers to improve code quality and skills
- Evaluate and implement new tools and frameworks to enhance data capabilities
- Troubleshoot complex data-related issues and support production deployments
- Ensure compliance with data security and governance standards
Posted 1 week ago
7.0 - 12.0 years
35 - 50 Lacs
Hyderabad
Work from Office
Job Description:
- Spark, Java
- Strong SQL writing skills; data discovery, data profiling, data exploration, and data wrangling skills
- Kafka, AWS S3, Lake Formation, Athena, Glue, Autosys or similar tools, FastAPI (secondary)
- Strong SQL skills to support data analysis and embedded business logic in SQL, data profiling, and gap assessment
- Collaborate with development and business SMEs within technology to understand data requirements, and perform data analysis to support and validate business logic, data integrity, and data quality rules within a centralized data platform
- Experience working within the banking/financial services industry with a solid understanding of financial products and business processes
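The data-profiling and gap-assessment work named above usually reduces to a handful of aggregate queries: row counts, null rates, and distinct cardinality per column. A hedged sketch, using Python's built-in sqlite3 with an invented trades table; the same SELECT pattern applies unchanged on Athena or Spark SQL:

```python
import sqlite3

# Hypothetical table for illustration only; column names are invented.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE trades (trade_id INTEGER, ccy TEXT, notional REAL);
INSERT INTO trades VALUES (1, 'USD', 100.0), (2, 'EUR', 250.0),
                          (3, NULL,  75.0), (4, 'USD', NULL);
""")

# One pass over the table yields a basic data-quality profile:
# total rows, nulls in a key column, its cardinality, and the value range.
profile = conn.execute("""
SELECT COUNT(*)                                     AS row_count,
       SUM(CASE WHEN ccy IS NULL THEN 1 ELSE 0 END) AS null_ccy,
       COUNT(DISTINCT ccy)                          AS distinct_ccy,
       MIN(notional), MAX(notional)
FROM trades
""").fetchone()
print(profile)  # → (4, 1, 2, 75.0, 250.0)
```

The null count (1) and the distinct count (2, since NULL is excluded) are the kind of figures that feed a gap assessment against a data-quality rule such as "ccy must always be populated".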
Posted 1 week ago
7.0 - 9.0 years
8 - 14 Lacs
Faridabad
Work from Office
Job Summary: We are looking for a seasoned Tech Anchor with deep expertise in Big Data technologies and Python to lead technical design, development, and mentoring across data-driven projects. This role demands a strong grasp of scalable data architecture, strong problem-solving capabilities, and hands-on experience with distributed systems and modern data frameworks.

Key Responsibilities:
- Provide technical leadership across Big Data and Python-based projects
- Architect, design, and implement scalable data pipelines and processing systems
- Guide teams on best practices in data modeling, ETL/ELT development, and performance optimization
- Collaborate with data scientists, analysts, and stakeholders to ensure effective data solutions
- Conduct code reviews and mentor junior engineers to improve code quality and skills
- Evaluate and implement new tools and frameworks to enhance data capabilities
- Troubleshoot complex data-related issues and support production deployments
- Ensure compliance with data security and governance standards
Posted 1 week ago
7.0 - 9.0 years
8 - 14 Lacs
Jaipur
Work from Office
Job Summary: We are looking for a seasoned Tech Anchor with deep expertise in Big Data technologies and Python to lead technical design, development, and mentoring across data-driven projects. This role demands a strong grasp of scalable data architecture, strong problem-solving capabilities, and hands-on experience with distributed systems and modern data frameworks.

Key Responsibilities:
- Provide technical leadership across Big Data and Python-based projects
- Architect, design, and implement scalable data pipelines and processing systems
- Guide teams on best practices in data modeling, ETL/ELT development, and performance optimization
- Collaborate with data scientists, analysts, and stakeholders to ensure effective data solutions
- Conduct code reviews and mentor junior engineers to improve code quality and skills
- Evaluate and implement new tools and frameworks to enhance data capabilities
- Troubleshoot complex data-related issues and support production deployments
- Ensure compliance with data security and governance standards
Posted 1 week ago
7.0 - 9.0 years
8 - 14 Lacs
Nagpur
Work from Office
Job Summary: We are looking for a seasoned Tech Anchor with deep expertise in Big Data technologies and Python to lead technical design, development, and mentoring across data-driven projects. This role demands a strong grasp of scalable data architecture, strong problem-solving capabilities, and hands-on experience with distributed systems and modern data frameworks.

Key Responsibilities:
- Provide technical leadership across Big Data and Python-based projects
- Architect, design, and implement scalable data pipelines and processing systems
- Guide teams on best practices in data modeling, ETL/ELT development, and performance optimization
- Collaborate with data scientists, analysts, and stakeholders to ensure effective data solutions
- Conduct code reviews and mentor junior engineers to improve code quality and skills
- Evaluate and implement new tools and frameworks to enhance data capabilities
- Troubleshoot complex data-related issues and support production deployments
- Ensure compliance with data security and governance standards
Posted 1 week ago
3.0 - 6.0 years
8 - 13 Lacs
Bengaluru
Work from Office
KPMG India is looking for an Azure Data Engineer - Assistant Manager to join our dynamic team and embark on a rewarding career journey.
- Liaising with coworkers and clients to elucidate the requirements for each task
- Conceptualizing and generating infrastructure that allows big data to be accessed and analyzed
- Reformulating existing frameworks to optimize their functioning
- Testing such structures to ensure that they are fit for use
- Preparing raw data for manipulation by data scientists
- Detecting and correcting errors in your work
- Ensuring that your work remains backed up and readily accessible to relevant coworkers
- Remaining up to date with industry standards and technological advancements that will improve the quality of your outputs
Posted 1 week ago
6.0 - 9.0 years
5 - 10 Lacs
Noida
Work from Office
Relevant experience and skills:

Must haves:
- At least 6-9 years of work experience in US and overseas payroll
- Understanding of customer invoicing and timesheet management
- Quick learner with good presentation skills
- Strong sense of urgency and results-orientation
- MS Office: advanced Excel and good PowerPoint
- Acquainted with different client portals such as Wand, Fieldglass, Beeline, Coupa, and Ariba

Good to have:
- Experience in or background with the IT staffing business
- ERP working knowledge
- QuickBooks
Posted 1 week ago
5.0 - 9.0 years
12 - 17 Lacs
Noida
Work from Office
- Spark/PySpark: hands-on technical data processing
- Table design knowledge using Hive (similar to RDBMS knowledge)
- Database SQL knowledge for retrieval of data; transformation queries such as joins (full, left, right), ranking, and group by
- Good communication skills
- Additional skills: GitHub, Jenkins, and shell scripting would be an added advantage

Mandatory Competencies:
- Big Data: PySpark, Spark, Hadoop, Hive
- DevOps/Configuration Mgmt: Jenkins; GitLab/GitHub/Bitbucket; basic Bash/shell script writing
- Database Programming: SQL
- Behavioral: communication and collaboration
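The transformation queries listed above (joins, ranking, group by) fit in one short statement. The sketch below uses Python's built-in sqlite3 as a stand-in for Hive/Spark SQL, with invented orders/customers tables; it is an illustration of the query shapes, not a Spark job.

```python
import sqlite3

# Hypothetical tables for illustration only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders    (order_id INT, cust_id INT, amount REAL);
CREATE TABLE customers (cust_id INT, region TEXT);
INSERT INTO customers VALUES (1, 'NORTH'), (2, 'SOUTH');
INSERT INTO orders VALUES (10, 1, 500.0), (11, 1, 300.0),
                          (12, 2, 700.0), (13, 3, 100.0);
""")

# The left join keeps order 13 (no matching customer -> NULL region);
# the subquery aggregates per region, the outer query ranks the totals.
rows = conn.execute("""
SELECT region, total, RANK() OVER (ORDER BY total DESC) AS rnk
FROM (
    SELECT c.region, SUM(o.amount) AS total
    FROM orders o
    LEFT JOIN customers c ON o.cust_id = c.cust_id
    GROUP BY c.region
)
ORDER BY rnk
""").fetchall()
print(rows)  # → [('NORTH', 800.0, 1), ('SOUTH', 700.0, 2), (None, 100.0, 3)]
```

On Spark, the same SQL would run via spark.sql() once the tables are registered as views, since Spark SQL supports the same LEFT JOIN, GROUP BY, and RANK() OVER constructs.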
Posted 1 week ago
4.0 - 8.0 years
10 - 14 Lacs
Chennai
Work from Office
Role Description:
- Provides leadership for the overall architecture, design, development, and deployment of a full-stack, cloud-native data analytics platform
- Designing and augmenting solution architecture for data ingestion, data preparation, data transformation, data load, ML & simulation modelling, Java BE & FE, state machine, API management, and intelligence consumption using data products, on cloud
- Understand business requirements and help develop high-level and low-level data engineering and data processing documentation for the cloud-native architecture
- Developing conceptual, logical, and physical target-state architecture, engineering, and operational specs
- Work with the customer, users, technical architects, and application designers to define the solution requirements and structure for the platform
- Model and design the application data structure, storage, and integration
- Lead the database analysis, design, and build effort
- Work with the application architects and designers to design the integration solution
- Ensure that the database designs fulfill the requirements, including data volume, frequency needs, and long-term data growth
- Able to perform data engineering tasks using Spark
- Knowledge of developing efficient frameworks for development and testing using Sqoop/NiFi/Kafka/Spark/Streaming/WebHDFS/Python to enable seamless data ingestion onto the Hadoop/BigQuery platforms
- Enabling data governance and data discovery
- Exposure to job monitoring frameworks along with validation automation
- Exposure to handling structured, unstructured, and streaming data

Technical Skills:
- Experience building a data platform on cloud (data lake, data warehouse environment, Databricks)
- Strong technical understanding of data modeling, design, and architecture principles and techniques across master data, transaction data, and derived/analytic data
- Proven background designing and implementing architectural solutions which solve strategic and tactical business needs
- Deep knowledge of best practices through relevant experience across data-related disciplines and technologies, particularly for enterprise-wide data architectures, data management, data governance, and data warehousing
- Highly competent with database design and data modeling
- Strong data warehousing and business intelligence skills, including: handling ELT and scalability issues for an enterprise-level data warehouse; creating ETLs/ELTs to handle data from various data sources and formats
- Strong hands-on experience with programming languages like Python and Scala, with Spark and Beam
- Solid hands-on and solution-architecting experience in cloud technologies: AWS, Azure, and GCP (GCP preferred)
- Hands-on experience with data processing at scale using event-driven systems and message queues (Kafka/Flink/Spark Streaming)
- Hands-on experience with GCP services such as BigQuery, Dataproc, Pub/Sub, Dataflow, Cloud Composer, API Gateway, data lake, Bigtable, Spark, and Apache Beam, plus feature engineering/data processing for model development
- Experience gathering and processing raw data at scale (including writing scripts, web scraping, calling APIs, writing SQL queries, etc.)
- Experience building data pipelines for structured/unstructured, real-time/batch, and event/synchronous/asynchronous data using MQ, Kafka, and stream processing
- Hands-on experience analyzing source system data and data flows, working with structured and unstructured data
- Must be very strong in writing Spark SQL queries
- Strong organizational skills, with the ability to work autonomously as well as lead a team
- Pleasant personality with strong communication and interpersonal skills

Qualifications:
- A bachelor's degree in computer science, computer engineering, or a related discipline is required to work as a technical lead
- Certification in GCP would be a big plus
- Individuals in this field can further display their leadership skills by completing the Project Management Professional certification offered by the Project Management Institute
Posted 1 week ago
4.0 - 8.0 years
8 - 12 Lacs
Pune
Work from Office
Piller Soft Technology is looking for a Lead Data Engineer to join our dynamic team and embark on a rewarding career journey. Lead data engineers are responsible for:
- Designing and developing data pipelines that move data from various sources to storage and processing systems
- Building and maintaining data infrastructure, such as data warehouses, data lakes, and data marts
- Ensuring data quality and integrity by setting up data validation processes and implementing data quality checks
- Managing data storage and retrieval by designing and implementing data storage systems, such as NoSQL databases or Hadoop clusters
- Developing and maintaining data models, such as data dictionaries and entity-relationship diagrams, to ensure consistency in data architecture
- Managing data security and privacy by implementing security measures, such as access controls and encryption, to protect sensitive data
- Leading and managing a team of data engineers, providing guidance and support for their work
Posted 1 week ago
4.0 - 9.0 years
9 - 14 Lacs
Bengaluru
Work from Office
Minimum of 4+ years of software development experience with demonstrated expertise in standard development best-practice methodologies.

Skills required:
- Spark, Scala, Python, HDFS, Hive, schedulers (Oozie, Airflow), Kafka
- Spark/Scala, SQL, RDBMS
- Docker, Kubernetes
- RabbitMQ/Kafka
- Monitoring tools: Splunk or ELK

Profile required:
- Integrate test frameworks into the development process
- Refactor existing solutions to make them reusable and scalable
- Work with operations to get solutions deployed
- Take ownership of production deployment of code
- Collaborate with and/or lead cross-functional teams; build and launch applications and data platforms at scale, either for revenue-generating or operational purposes
- Come up with coding and design best practices
- Thrive in a self-motivated, internal-innovation-driven environment
- Adapt quickly to new application knowledge and changes
Posted 1 week ago
4.0 - 7.0 years
13 - 18 Lacs
Bengaluru
Work from Office
- Design, development, and testing of components/modules in TOP (Trade Open Platform) involving Spark, Java, Hive, and related big-data technologies in a data lake architecture
- Contribute to the design, development, and deployment of new features and new components in the Azure public cloud
- Contribute to the evolution of REST APIs in TOP: enhancement, development, and testing of new APIs
- Ensure the processes in TOP provide optimal performance, and assist in performance tuning and optimization
- Release deployment: deploy using CI/CD practices and tools in various environments (development, UAT, and production) and follow production processes
- Ensure craftsmanship practices are followed
- Follow the Agile at Scale process: participation in PI planning and follow-up, sprint planning, and backlog maintenance in Jira
- Organize training sessions on the core platform and related technologies for the tribe/business line to keep relevant stakeholders continuously updated on platform evolution
Posted 1 week ago
6.0 - 8.0 years
1 - 4 Lacs
Chennai
Hybrid
- 3+ years of experience as a Snowflake Developer or Data Engineer
- Strong knowledge of SQL, SnowSQL, and Snowflake schema design
- Experience with ETL tools and data pipeline automation
- Basic understanding of US healthcare data (claims, eligibility, providers, payers)
- Experience working with large-scale datasets and cloud platforms (AWS, Azure, GCP)
- Familiarity with data governance, security, and compliance (HIPAA, HITECH)
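The schema-design and healthcare-data points above come together in a dimensional model: claims as a fact table joined to conformed dimensions. A minimal sketch under stated assumptions: Python's built-in sqlite3 stands in for Snowflake, and every name (fact_claim, dim_provider) is invented for illustration.

```python
import sqlite3

# Stand-in sketch: SQLite in place of Snowflake; hypothetical star schema.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_provider (provider_key INTEGER PRIMARY KEY,
                           npi TEXT, specialty TEXT);
CREATE TABLE fact_claim   (claim_id TEXT, provider_key INTEGER,
                           paid_amount REAL,
                           FOREIGN KEY (provider_key)
                               REFERENCES dim_provider(provider_key));
INSERT INTO dim_provider VALUES (1, '1234567890', 'CARDIOLOGY');
INSERT INTO fact_claim VALUES ('C-001', 1, 1200.50), ('C-002', 1, 300.00);
""")

# Typical analytic query over the star: paid amounts rolled up by a
# dimension attribute.
total = conn.execute("""
SELECT d.specialty, SUM(f.paid_amount)
FROM fact_claim f JOIN dim_provider d USING (provider_key)
GROUP BY d.specialty
""").fetchone()
print(total)  # → ('CARDIOLOGY', 1500.5)
```

Keeping provider attributes in the dimension rather than on each claim row is what lets the same rollup extend to eligibility or payer dimensions without touching the fact table.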
Posted 1 week ago
3.0 - 5.0 years
4 - 8 Lacs
Noida
Work from Office
Relevant experience and skills:

Must haves:
- At least 3-5 years of work experience in US and overseas payroll
- Understanding of customer invoicing and timesheet management
- Quick learner with good presentation skills
- Strong sense of urgency and results-orientation
- MS Office: advanced Excel and good PowerPoint
- Acquainted with different client portals such as Wand, Fieldglass, Beeline, Coupa, and Ariba

Good to have:
- Experience in or background with the IT staffing business
- ERP working knowledge
- QuickBooks
Posted 1 week ago