2.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Role: Data Engineer
Primary Skills: Python and PySpark (mandatory), AWS services and pipelines
Location: Hyderabad / Pune / Coimbatore
Experience: 2 - 4 years of experience

Job Summary: We are looking for a Lead Data Engineer who will be responsible for building AWS data pipelines as per requirements. Should have strong analytical skills, design capabilities, and problem-solving skills. Based on stakeholders' requirements, should be able to propose solutions to the customer for review and discuss the pros/cons of different solution designs and optimization strategies.

Responsibilities:
- Provide technical and development support to clients to build and maintain data pipelines.
- Develop data mapping documents listing business and transformational rules.
- Develop, unit test, deploy and maintain data pipelines.
- Design a storage layer for storing tabular/semi-structured/unstructured data.
- Design pipelines for batch/real-time processing of large data volumes.
- Analyze source specifications and build data mapping documents.
- Identify and document applicable non-functional code sets and reference data across insurance domains.
- Understand profiling results and validate data quality rules.
- Utilize data analysis tools to construct and manipulate datasets to support analyses.
- Collaborate with and support Quality Assurance (QA) in building functional scenarios and validating results.

Requirements:
- 2+ years' experience developing and maintaining modern ingestion pipelines using technologies like AWS pipelines, Lambda, Spark, Apache NiFi, etc.
- Basic understanding of the MLOps lifecycle (data prep -> model training -> model deployment -> model inference -> model re-training).
- Should be able to design data pipelines for batch/real-time processing using Lambda, Step Functions, API Gateway, SNS, S3.
- Hands-on experience on AWS Cloud and its native components like S3, Athena, Redshift and Jupyter Notebooks.
- Requirements gathering: active involvement during requirements discussions with project sponsors, defining the project scope and delivery timelines, design and development.
- Strong in Spark Scala and Python pipelines (ETL and streaming).
- Strong experience in metadata management tools like AWS Glue.
- Strong experience in coding with languages like Java and Python.
- Good to have: AWS Developer certification.
- Good to have: Postman API and Apache Airflow or similar scheduler experience.
- Working with cross-functional teams to meet strategic goals.
- Experience in high-volume data environments.
- Critical thinking and excellent verbal and written communication skills.
- Strong problem-solving and analytical abilities; should be able to work and deliver individually.
- Good knowledge of data warehousing concepts.

Desired Skill Set: Lambda, Step Functions, API Gateway, SNS, S3 (unstructured data), DynamoDB (semi-structured data), Aurora PostgreSQL (tabular data), AWS SageMaker, AWS CodeCommit/GitLab, AWS CodeBuild, AWS CodePipeline, AWS ECR.

About the Company: ValueMomentum is amongst the fastest-growing insurance-focused IT services providers in North America. Leading insurers trust ValueMomentum with their core, digital and data transformation initiatives. Having grown consistently every year by 24%, we have now grown to over 4000 employees. ValueMomentum is committed to integrity and to ensuring that each team and employee is successful. We foster an open work culture where employees' opinions are valued. We believe in teamwork and cultivate a sense of fun, fellowship, and pride among our employees.
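The AWS-native pipeline stack this role calls out (Lambda, S3, Step Functions, SNS) often starts with an event-driven ingestion step. Below is a minimal, illustrative sketch of an S3-triggered Lambda handler written with boto3; the bucket names, prefixes, and validation rule are hypothetical placeholders, not part of the posting.

```python
# Minimal sketch of an S3-triggered AWS Lambda ingestion step (boto3).
# Bucket names, prefixes, and the validation rule are hypothetical.
import json
import boto3

s3 = boto3.client("s3")

CURATED_BUCKET = "example-curated-bucket"  # hypothetical target bucket

def handler(event, context):
    """Copy each newly landed object into a curated prefix after a basic check."""
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        obj = s3.get_object(Bucket=bucket, Key=key)
        body = obj["Body"].read()

        # Basic validation: reject empty files before promoting them.
        if not body:
            print(f"Skipping empty object s3://{bucket}/{key}")
            continue

        s3.put_object(Bucket=CURATED_BUCKET, Key=f"curated/{key}", Body=body)
    return {"statusCode": 200, "body": json.dumps("ok")}
```

In a fuller pipeline of the kind described, Step Functions or SNS would typically orchestrate or fan out from a step like this.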
Benefits: We at ValueMomentum offer you the opportunity to grow by working alongside the experts. Some of the benefits you can avail are:
- Competitive compensation package comparable to the best in the industry.
- Career Advancement: individual career development, coaching and mentoring programs for professional and leadership skill development; comprehensive training and certification programs.
- Performance Management: goal setting, continuous feedback and year-end appraisal; reward and recognition for extraordinary performers.
- Benefits: comprehensive health benefits, wellness and fitness programs; paid time off and holidays.
- Culture: a highly transparent organization with an open-door policy and a vibrant culture.
Posted 1 month ago
5.0 years
0 Lacs
Coimbatore, Tamil Nadu, India
On-site
Preferable: Tamil Nadu candidates
Experience: 5+ years
Notice Period: Immediate to 15 days
Rounds: 3 rounds (virtual)
Mandatory Skills: Apache Spark, Hive, Hadoop, Scala, Databricks

Job Description - The Role:
- Designing and building optimized data pipelines using cutting-edge technologies in a cloud environment to drive analytical insights.
- Constructing infrastructure for efficient ETL processes from various sources and storage systems.
- Leading the implementation of algorithms and prototypes to transform raw data into useful information.
- Architecting, designing, and maintaining database pipeline architectures, ensuring readiness for AI/ML transformations.
- Creating innovative data validation methods and data analysis tools.
- Ensuring compliance with data governance and security policies.
- Interpreting data trends and patterns to establish operational alerts.
- Developing analytical tools, programs, and reporting mechanisms.
- Conducting complex data analysis and presenting results effectively.
- Preparing data for prescriptive and predictive modeling.
- Continuously exploring opportunities to enhance data quality and reliability.
- Applying strong programming and problem-solving skills to develop scalable solutions.

Requirements:
- Experience in Big Data technologies (Hadoop, Spark, NiFi, Impala).
- 5+ years of hands-on experience designing, building, deploying, testing, maintaining, monitoring, and owning scalable, resilient, and distributed data pipelines.
- High proficiency in Scala/Java and Spark for applied large-scale data processing.
- Expertise with big data technologies, including Spark, Data Lake, and Hive.
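For the Spark pipeline work described above, here is a small illustrative PySpark batch job (the role leans toward Scala, but Python is used to keep the examples in one language). The paths, key column, and transformation are assumptions made for the sketch, not details from the posting.

```python
# Minimal PySpark batch pipeline sketch: read raw data, apply a transformation,
# and write a partitioned table. Paths and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_daily_etl").getOrCreate()

raw = spark.read.parquet("s3a://example-raw/orders/")       # hypothetical source

cleaned = (
    raw.dropDuplicates(["order_id"])                         # de-duplicate on a key
       .filter(F.col("amount").isNotNull())                  # drop rows failing a basic rule
       .withColumn("order_date", F.to_date("order_ts"))      # derive a partition column
)

(cleaned.write
        .mode("overwrite")
        .partitionBy("order_date")
        .parquet("s3a://example-curated/orders/"))           # hypothetical target

spark.stop()
```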
Posted 1 month ago
0.0 years
0 Lacs
Gurugram, Haryana
On-site
The Data Engineer is responsible for designing, developing, and maintaining data pipelines and architectures. This role works closely with data scientists, analysts, and other stakeholders to ensure data is accessible, reliable, and optimized for various analytical and operational needs.
- Develop, construct, test, and maintain data architectures, including databases and large-scale processing systems.
- Assemble large, complex data sets that meet functional and non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, and re-designing infrastructure for greater scalability.
- Build the infrastructure required for optimal extraction, transformation, and loading (ETL) of data from a wide variety of data sources.
- Collaborate with data scientists and analysts to support their data needs and ensure data accuracy and integrity.
- Monitor and troubleshoot data pipelines and workflows to ensure smooth operation.
- Execute steady-state operating and monitoring procedures for our data warehouse and provide periodic 24x7 on-call support as necessary.

What You Bring to the Table:
- Bachelor's Degree in Computer Science, Information Technology, or a related field.
- 2+ years of experience in data engineering or a related field.
- Proficiency in SQL and experience with relational databases.
- Basic experience with big data tools (e.g., Hadoop, Spark), data pipeline tools (e.g., Apache NiFi, Airflow), and programming languages (Java, Python, ABAP, SQL).
- Strong problem-solving skills and attention to detail.
- Excellent communication skills and the ability to work collaboratively in a team environment.
- Language proficiency: fluent in English, both spoken and written.

Brown-Forman Corporation is committed to equality of opportunity in all aspects of employment. It is the policy of Brown-Forman Corporation to provide full and equal employment opportunities to all employees and potential employees without regard to race, color, religion, national or ethnic origin, veteran status, age, gender, gender identity or expression, sexual orientation, genetic information, physical or mental disability or any other legally protected status.

Business Area: Global Information Technology | Function: IT | City: Gurgaon | State: Haryana | Country: IND | Req ID: JR-00009027
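Since Airflow is listed above among the data pipeline tools, the sketch below shows a minimal daily ETL DAG. The DAG id, schedule, and task bodies are hypothetical placeholders, and a recent Airflow 2.x installation is assumed.

```python
# Minimal Apache Airflow DAG sketch for a daily extract -> transform -> load flow.
# DAG id, schedule, and the task bodies are hypothetical placeholders.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull data from the source system")

def transform():
    print("apply business rules and cleansing")

def load():
    print("write the result to the warehouse")

with DAG(
    dag_id="example_daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load
```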
Posted 1 month ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Our Purpose: Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we’re helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential.

Title and Summary: Lead Software Test Engineer (Automation Tester) / Lead SDET

Who is Mastercard? Mastercard is a global technology company in the payments industry. Our mission is to connect and power an inclusive, digital economy that benefits everyone, everywhere by making transactions safe, simple, smart, and accessible. Using secure data and networks, partnerships and passion, our innovations and solutions help individuals, financial institutions, governments, and businesses realize their greatest potential. Our decency quotient, or DQ, drives our culture and everything we do inside and outside of our company. With connections across more than 210 countries and territories, we are building a sustainable world that unlocks priceless possibilities for all.

Job Overview: As part of an exciting, fast-paced environment developing payment authentication and security solutions, this position will provide technical leadership and expertise within the development lifecycle for the ecommerce payment authentication platform under the Authentication program for Digital.

Overview: We are looking for an Automation Tester to join the PVS Identity Solutions team. This is a pivotal role, responsible for QA, load testing and automation of various data-driven pipelines. The position involves managing testing infrastructure for functional testing and automation, and coordination of testing that spans multiple programs and projects. The ideal candidate will have experience working with large-scale data and automation testing of Java, cloud-native applications/services.
The position will:
- Lead the development and maintenance of automated testing frameworks.
- Provide technical leadership for new major initiatives.
- Deliver innovative, cost-effective solutions which align to enterprise standards.
- Drive the reduction of time spent testing.
- Work to minimize manual testing by identifying high-ROI test cases and automating them.
- Be an integrated part of an Agile engineering team, working interactively with software engineer leads, architects, testing engineers, and product managers from the beginning of the development cycle.
- Help ensure functionality delivered in each release is fully tested end to end.
- Manage multiple priorities and tasks in a dynamic work environment.

All About You:
- Bachelor's degree in computer science or equivalent work experience with hands-on technical and quality engineering skills.
- Expertise in testing methods, standards, and conventions including automation and test case creation.
- Excellent technical acumen, strong organizational and problem-solving skills with great attention to detail, critical thinking, solid communication, and proven leadership skills.
- Solid leadership and mentoring skills with the ability to drive change.
- Experience in testing ETL processes.
- Experience in testing automation frameworks and agile.
- Knowledge of Python/Hadoop/Spark, Java, SQL, APIs (REST/SOAP), code reviews, scanning tools and configuration, and branching techniques.
- Experience with application monitoring tools such as Dynatrace and Splunk.
- Experience with chaos, software security, and crypto testing practices.
- Experience with performance testing.
- Experience with DevOps practices (continuous integration and delivery, and tools such as Jenkins).

Nice to have knowledge of, or prior experience with, any of the following: Apache Kafka; Apache Spark with Scala; orchestration with Apache NiFi or Apache Airflow; microservices architecture; build tools like Jenkins; mobile testing skills; working with large data sets with terabytes of data.

Corporate Security Responsibility: All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization and, therefore, it is expected that every person working for, or on behalf of, Mastercard is responsible for information security and must: abide by Mastercard's security policies and practices; ensure the confidentiality and integrity of the information being accessed; report any suspected information security violation or breach; and complete all periodic mandatory security trainings in accordance with Mastercard's guidelines. R-249227
Posted 1 month ago
0 years
0 Lacs
Pune
On-site
Job description: Some careers shine brighter than others. If you’re looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further. HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions. We are currently seeking an experienced professional to join our team in the role of Consultant Specialist.

In this role, you will:
- Provide technical leadership for a team of engineers that focuses on development, deployment and operations.
- Lead and contribute to multiple pods with moderate resource requirements, risk, and/or complexity.
- Interface technically with a range of stakeholders with customer and business impact, leading others to solve complex problems.
- Work within an agile, multidisciplinary DevOps team.
- Migrate and re-engineer existing services from on-premises data centers to Cloud (GCP/AWS).
- Understand the business requirements and provide real-time solutions.
- Follow project development tools like JIRA, Confluence and Git.
- Write Python/shell scripts to automate operations and server management.
- Build and maintain operations tools for monitoring, notifications, trending, and analysis.
- Define, create, test, and execute operations procedures; document current and future configuration processes and policies.

Requirements - to be successful in this role, you should meet the following:
- Mandatory: 3+ years of hands-on working experience with Apache NiFi and Apache Kafka.
- Mandatory: Google Cloud BigQuery scripting skills.
- Working experience with Scala/Java is an added advantage.
- Mandatory: SQL and PL/SQL scripting experience.
- Mandatory: any one of Python/Linux/Unix skills.

www.hsbc.com/careers HSBC is committed to building a culture where all employees are valued, respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website. Issued by – HSBC Software Development India
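As an illustration of the BigQuery scripting plus Python automation this role asks for, the sketch below runs a parameterized query with the google-cloud-bigquery client. The project, dataset, table, and date value are hypothetical, and credentials are assumed to come from the environment.

```python
# Minimal sketch of a BigQuery scripting task with the google-cloud-bigquery client.
# Project, dataset, and table names are hypothetical; credentials come from the environment.
from google.cloud import bigquery

client = bigquery.Client()  # picks up GOOGLE_APPLICATION_CREDENTIALS / ADC

query = """
    SELECT event_date, COUNT(*) AS row_count
    FROM `example-project.example_dataset.events`
    WHERE event_date = @run_date
    GROUP BY event_date
"""

job_config = bigquery.QueryJobConfig(
    query_parameters=[bigquery.ScalarQueryParameter("run_date", "DATE", "2024-01-01")]
)

for row in client.query(query, job_config=job_config).result():
    print(f"{row.event_date}: {row.row_count} rows loaded")
```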
Posted 1 month ago
9.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Area(s) of responsibility

About Birlasoft: Birlasoft is a powerhouse where domain expertise, enterprise solutions, and digital technologies converge to redefine business processes. We take pride in our consultative and design thinking approach, driving societal progress by enabling our customers to run businesses with unmatched efficiency and innovation. As part of the CKA Birla Group, a multibillion-dollar enterprise, we boast a 12,500+ professional team committed to upholding the Group's 162-year legacy. Our core values prioritize Diversity, Equity, and Inclusion (DEI) initiatives, along with Corporate Sustainable Responsibility (CSR) activities, demonstrating our dedication to building inclusive and sustainable communities. Join us in shaping a future where technology seamlessly aligns with purpose.

About the Job: Develop and maintain automated test scripts using UFT (Unified Functional Testing) to ensure the quality of web, desktop, and enterprise applications.
Job Title: Kafka Sr Developer
Experience Required: 9+ relevant years
Notice Period: Immediate joiners only
Education: Engineering or a related field
Location: Hyderabad only

JD: Python | MS SQL | Java | Azure Databricks | Spark | Kinesis | Kafka | Sqoop | Hive | Apache NiFi | Unix Shell Scripting
The person should be able to work with the business team, understand the requirements, work on the design and development (hands-on), and support the testing, go-live and hypercare phases. The person should also act as a mentor and guide to the offshore Medronic Kafka developer, reviewing their work and taking ownership of the deliverables.
Posted 1 month ago
4.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Position: Database Developer
Experience: 4 to 8 years
Mandatory Skills: SQL, Informatica, ETL
Candidates serving notice period who can join in July may apply for this role.

Job Summary: We are seeking a highly skilled Database Developer with strong expertise in SQL and ETL processes to join our data engineering team. The ideal candidate will be responsible for designing, developing, and maintaining robust data pipelines and ensuring efficient data storage and access across various systems.

Key Responsibilities:
- Develop, test, and maintain SQL queries, stored procedures, and database objects.
- Design and implement ETL workflows to extract, transform, and load data from multiple sources.
- Optimize existing database queries for performance and scalability.
- Collaborate with data analysts, software developers, and business stakeholders to understand data requirements.
- Ensure data integrity, accuracy, and consistency across systems.
- Monitor and troubleshoot ETL jobs and perform root cause analysis of failures.
- Participate in data modeling and schema design activities.
- Maintain technical documentation and adhere to best practices in database development.

Required Skills & Qualifications:
- Proven experience (4+ years) as a Database Developer or in a similar role.
- Strong proficiency in writing complex SQL queries, procedures, and performance tuning.
- Hands-on experience with ETL tools (e.g., Informatica, Talend, SSIS, Apache NiFi, etc.).
- Solid understanding of relational database design, normalization, and data warehousing concepts.
- Experience with RDBMS platforms such as SQL Server, PostgreSQL, Oracle, or MySQL.
- Ability to analyze and interpret complex datasets and business requirements.
- Familiarity with data governance, data quality, and data security best practices.
Posted 1 month ago
4.0 - 7.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Category: Testing/Quality Assurance
Main location: India, Karnataka, Bangalore
Position ID: J0525-1991
Employment Type: Full Time

Position Description:
Job Title: ETL Testing | Position: Test Engineer | Experience: 4-7 Years | Category: Software Development/Engineering | Shift: 1PM to 1PM

We are looking for a skilled ETL Tester with hands-on experience in SQL and Python to join our Quality Engineering team. The ideal candidate will be responsible for validating data pipelines, ensuring data quality, and supporting the end-to-end ETL testing lifecycle in a fast-paced environment.

Your future duties and responsibilities:
- Design, develop, and execute test cases for ETL workflows and data pipelines.
- Perform data validation and reconciliation using advanced SQL queries.
- Use Python for automation of test scripts, data comparison, and validation tasks.
- Work closely with Data Engineers and Business Analysts to understand data transformations and business logic.
- Perform root cause analysis of data discrepancies and report defects in a timely manner.
- Validate data across source systems, staging, and target data stores (e.g., Data Lakes, Data Warehouses).
- Participate in Agile ceremonies, including sprint planning and daily stand-ups.
- Maintain test documentation including test plans, test cases, and test results.

Required qualifications to be successful in this role:
- 5+ years of experience in ETL/Data Warehouse testing.
- Strong proficiency in SQL (joins, aggregations, window functions, etc.).
- Experience in Python scripting for test automation and data validation.
- Hands-on experience with tools like Informatica, Talend, Apache NiFi, or similar ETL tools.
- Understanding of data models, data marts, and star/snowflake schemas.
- Familiarity with test management and bug tracking tools (e.g., JIRA, HP ALM).
- Strong analytical, debugging, and problem-solving skills.

Good to Have:
- Exposure to Big Data technologies (e.g., Hadoop, Hive, Spark).
- Experience with Cloud platforms (e.g., AWS, Azure, GCP) and related data services.
- Knowledge of CI/CD tools and automated data testing frameworks.
- Experience working in Agile/Scrum teams.

Skills: Jira, SQLite, Banking, ETL, Python

What you can expect from us: Together, as owners, let’s turn meaningful insights into action. Life at CGI is rooted in ownership, teamwork, respect and belonging. Here, you’ll reach your full potential because… You are invited to be an owner from day 1 as we work together to bring our Dream to life. That’s why we call ourselves CGI Partners rather than employees. We benefit from our collective success and actively shape our company’s strategy and direction. Your work creates value. You’ll develop innovative solutions and build relationships with teammates and clients while accessing global capabilities to scale your ideas, embrace new opportunities, and benefit from expansive industry and technology expertise. You’ll shape your career by joining a company built to grow and last. You’ll be supported by leaders who care about your health and well-being and provide you with opportunities to deepen your skills and broaden your horizons. Come join our team—one of the largest IT and business consulting services firms in the world.
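As a concrete illustration of the SQL-plus-Python validation work described above, the sketch below reconciles row counts and a column aggregate between a source and a target. sqlite3 stands in for the real source and target connections so the example is self-contained; the table and column names are hypothetical.

```python
# Self-contained sketch of a source-to-target reconciliation check
# (row counts plus a column aggregate). sqlite3 stands in for the real
# source and target systems; table and column names are hypothetical.
import sqlite3

def table_profile(conn, table):
    """Return (row_count, total_amount) for a table."""
    cur = conn.execute(f"SELECT COUNT(*), COALESCE(SUM(amount), 0) FROM {table}")
    return cur.fetchone()

source = sqlite3.connect(":memory:")
target = sqlite3.connect(":memory:")

# Seed both sides with the same sample data so the check passes.
for conn in (source, target):
    conn.execute("CREATE TABLE orders (order_id INTEGER, amount REAL)")
    conn.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 10.0), (2, 25.5)])

src_count, src_sum = table_profile(source, "orders")
tgt_count, tgt_sum = table_profile(target, "orders")

assert src_count == tgt_count, f"Row count mismatch: {src_count} vs {tgt_count}"
assert abs(src_sum - tgt_sum) < 1e-6, f"Amount mismatch: {src_sum} vs {tgt_sum}"
print("Reconciliation passed")
```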
Posted 1 month ago
5.0 - 12.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Dear Candidate! Greetings from TCS !!!
Role: Red Hat Linux Administrator
Location: Bangalore/Chennai/Hyderabad/Mumbai/Indore
Experience Range: 5 to 12 Years

Job Description:
- Experience in services like HDFS, Sqoop, NiFi, Hive, HBase and Linux shell scripting.
- Experience in security-related components like Kerberos and Ranger.
- Obtain and analyze business requirements and document technical solutions.
- Leadership skills in technical initiatives.
- Generating detailed technical documentation.
- Communications that clearly articulate solutions and the ability to perform demonstrations.

TCS has been a great pioneer in feeding the fire of Young Techies like you. We are a global leader in the technology arena and there's nothing that can stop us from growing together.
Posted 1 month ago
5.0 years
0 Lacs
Delhi
On-site
The Role Context: This is an exciting opportunity to join a dynamic and growing organization, working at the forefront of technology trends and developments in social impact sector. Wadhwani Center for Government Digital Transformation (WGDT) works with the government ministries and state departments in India with a mission of “ Enabling digital transformation to enhance the impact of government policy, initiatives and programs ”. We are seeking a highly motivated and detail-oriented individual to join our team as a Data Engineer with experience in the designing, constructing, and maintaining the architecture and infrastructure necessary for data generation, storage and processing and contribute to the successful implementation of digital government policies and programs. You will play a key role in developing, robust, scalable, and efficient systems to manage large volumes of data, make it accessible for analysis and decision-making and driving innovation & optimizing operations across various government ministries and state departments in India. Key Responsibilities: a. Data Architecture Design : Design, develop, and maintain scalable data pipelines and infrastructure for ingesting, processing, storing, and analyzing large volumes of data efficiently. This involves understanding business requirements and translating them into technical solutions. b. Data Integration: Integrate data from various sources such as databases, APIs, streaming platforms, and third-party systems. Should ensure the data is collected reliably and efficiently, maintaining data quality and integrity throughout the process as per the Ministries/government data standards. c. Data Modeling: Design and implement data models to organize and structure data for efficient storage and retrieval. They use techniques such as dimensional modeling, normalization, and denormalization depending on the specific requirements of the project. d. Data Pipeline Development/ ETL (Extract, Transform, Load): Develop data pipeline/ETL processes to extract data from source systems, transform it into the desired format, and load it into the target data systems. This involves writing scripts or using ETL tools or building data pipelines to automate the process and ensure data accuracy and consistency. e. Data Quality and Governance: Implement data quality checks and data governance policies to ensure data accuracy, consistency, and compliance with regulations. Should be able to design and track data lineage, data stewardship, metadata management, building business glossary etc. f. Data lakes or Warehousing: Design and maintain data lakes and data warehouse to store and manage structured data from relational databases, semi-structured data like JSON or XML, and unstructured data such as text documents, images, and videos at any scale. Should be able to integrate with big data processing frameworks such as Apache Hadoop, Apache Spark, and Apache Flink, as well as with machine learning and data visualization tools. g. Data Security : Implement security practices, technologies, and policies designed to protect data from unauthorized access, alteration, or destruction throughout its lifecycle. It should include data access, encryption, data masking and anonymization, data loss prevention, compliance, and regulatory requirements such as DPDP, GDPR, etc. h. Database Management: Administer and optimize databases, both relational and NoSQL, to manage large volumes of data effectively. i. 
Data Migration: Plan and execute data migration projects to transfer data between systems while ensuring data consistency and minimal downtime.
j. Performance Optimization: Optimize data pipelines and queries for performance and scalability. Identify and resolve bottlenecks, tune database configurations, and implement caching and indexing strategies to improve data processing speed and efficiency.
k. Collaboration: Collaborate with data scientists, analysts, and other stakeholders to understand their data requirements and provide them with access to the necessary data resources. Work closely with IT operations teams to deploy and maintain data infrastructure in production environments.
l. Documentation and Reporting: Document work, including data models, data pipelines/ETL processes, and system configurations. Create documentation and provide training to other team members to ensure the sustainability and maintainability of data systems.
m. Continuous Learning: Stay updated with the latest technologies and trends in data engineering and related fields. Participate in training programs, attend conferences, and engage with the data engineering community to enhance skills and knowledge.

Desired Skills/Competencies:
Education: A Bachelor's or Master's degree in Computer Science, Software Engineering, Data Science, or equivalent with at least 5 years of experience.
Database Management: Strong expertise in working with databases, such as SQL databases (e.g., MySQL, PostgreSQL) and NoSQL databases (e.g., MongoDB, Cassandra).
Big Data Technologies: Familiarity with big data technologies, such as Apache Hadoop, Spark, and related ecosystem components, for processing and analyzing large-scale datasets.
ETL Tools: Experience with ETL tools (e.g., Apache NiFi, Talend, Apache Airflow, Talend Open Studio, Pentaho, Infosphere) for designing and orchestrating data workflows.
Data Modeling and Warehousing: Knowledge of data modeling techniques and experience with data warehousing solutions (e.g., Amazon Redshift, Google BigQuery, Snowflake).
Data Governance and Security: Understanding of data governance principles and best practices for ensuring data quality and security.
Cloud Computing: Experience with cloud platforms (e.g., AWS, Azure, Google Cloud) and their data services for scalable and cost-effective data storage and processing.
Streaming Data Processing: Familiarity with real-time data processing frameworks (e.g., Apache Kafka, Apache Flink) for handling streaming data.

KPIs:
Data Pipeline Efficiency: Measure the efficiency of data pipelines in terms of data processing time, throughput, and resource utilization. KPIs could include average time to process data, data ingestion rates, and pipeline latency.
Data Quality Metrics: Track data quality metrics such as completeness, accuracy, consistency, and timeliness of data. KPIs could include data error rates, missing values, data duplication rates, and data validation failures.
System Uptime and Availability: Monitor the uptime and availability of data infrastructure, including databases, data warehouses, and data processing systems. KPIs could include system uptime percentage, mean time between failures (MTBF), and mean time to repair (MTTR).
Data Storage Efficiency: Measure the efficiency of data storage systems in terms of storage utilization, data compression rates, and data retention policies. KPIs could include storage utilization rates, data compression ratios, and data storage costs per unit.
Data Security and Compliance: Track adherence to data security policies and regulatory compliance requirements such as DPDP, GDPR, HIPAA, or PCI DSS. KPIs could include security incident rates, data access permissions, and compliance audit findings.
Data Processing Performance: Monitor the performance of data processing tasks such as ETL (Extract, Transform, Load) processes, data transformations, and data aggregations. KPIs could include data processing time, CPU usage, and memory consumption.
Scalability and Performance Tuning: Measure the scalability and performance of data systems under varying workloads and data volumes. KPIs could include scalability benchmarks, system response times under load, and performance improvements achieved through tuning.
Resource Utilization and Cost Optimization: Track resource utilization and costs associated with data infrastructure, including compute resources, storage, and network bandwidth. KPIs could include cost per data unit processed, cost per query, and cost savings achieved through optimization.
Incident Response and Resolution: Monitor the response time and resolution time for data-related incidents and issues. KPIs could include incident response time, time to diagnose and resolve issues, and customer satisfaction ratings for support services.
Documentation and Knowledge Sharing: Measure the quality and completeness of documentation for data infrastructure, data pipelines, and data processes. KPIs could include documentation coverage, documentation update frequency, and knowledge sharing activities such as internal training sessions or knowledge base contributions.

Years of experience of the current role holder: New position
Ideal years of experience: 3 – 5 years
Career progression for this role: CTO WGDT (Head of Incubation Centre)

Wadhwani Corporate Profile - Our Culture: WF is a global not-for-profit, and works like a start-up, in a fast-moving, dynamic pace where change is the only constant and flexibility is the key to success. Three mantras that we practice across job roles, levels, functions, programs and initiatives, are Quality, Speed, Scale, in that order. We are an ambitious and inclusive organization, where everyone is encouraged to contribute and ideate. We are intensely and insanely focused on driving excellence in everything we do. We want individuals with the drive for excellence, and passion to do whatever it takes to deliver world class outcomes to our beneficiaries. We set our own standards often more rigorous than what our beneficiaries demand, and we want individuals who love it this way. We have a creative and highly energetic environment – one in which we look to each other to innovate new solutions not only for our beneficiaries but for ourselves too. Open to collaborate with a borderless mentality, often going beyond the hierarchy and siloed definitions of functional KRAs, are the individuals who will thrive in our environment. This is a workplace where expertise is shared with colleagues around the globe. Individuals uncomfortable with change, constant innovation, and short learning cycles and those looking for stability and orderly working days may not find WF to be the right place for them. Finally, we want individuals who want to do greater good for the society leveraging their area of expertise, skills and experience.
The foundation is an equal opportunity firm with no bias towards gender, race, colour, ethnicity, country, language, age and any other dimension that comes in the way of progress. Join us and be a part of us!
Education: Bachelors in Technology / Masters in Technology
Posted 1 month ago
4.0 years
4 - 7 Lacs
Bengaluru
On-site
Minimum Required Experience: 4 years
Employment Type: Full Time
Skills: ETL, Python

Description - ETL Developer: We are seeking an experienced ETL Developer to join our dynamic team. The ideal candidate will be responsible for designing and implementing ETL processes to extract, transform, and load data from various sources, including databases, APIs, and flat files.

Duties and Responsibilities:
- Design and implement ETL processes to extract, transform, and load data from various sources.
- Monitor and optimize ETL processes for performance and efficiency.
- Document ETL processes and maintain technical specifications.

Qualifications:
- 4-8 years of experience in ETL development.
- Proficiency in ETL tools and frameworks such as Apache NiFi, Talend, or Informatica.
- Strong programming skills in Python.
- Experience with data warehousing concepts and methodologies.
- Preferred: certifications in relevant ETL tools or data engineering.
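To illustrate the kind of ETL work described above in plain Python (independent of NiFi, Talend, or Informatica), here is a small extract-transform-load sketch; the input file, table, and columns are hypothetical placeholders.

```python
# Plain-Python ETL sketch: extract from a flat file, apply a simple
# transformation, and load into a database. File name, table, and
# columns are hypothetical.
import csv
import sqlite3

def extract(path):
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def transform(rows):
    # Normalise casing, cast the numeric field, and drop rows missing a key.
    out = []
    for r in rows:
        if not r.get("customer_id"):
            continue
        out.append((r["customer_id"].strip(), r["city"].title(), float(r["spend"])))
    return out

def load(records, db_path="warehouse.db"):
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS customer_spend (customer_id TEXT, city TEXT, spend REAL)"
    )
    conn.executemany("INSERT INTO customer_spend VALUES (?, ?, ?)", records)
    conn.commit()
    conn.close()

if __name__ == "__main__":
    load(transform(extract("customers.csv")))  # hypothetical input file
```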
Posted 1 month ago
4.0 - 8.0 years
0 - 1 Lacs
Hyderabad, Bengaluru
Hybrid
Role & responsibilities: As a Senior Associate (L1) in Data Engineering, you will translate client requirements into technical design and implement components for data engineering solutions. You will utilize a deep understanding of data integration and big data design principles in creating custom solutions or implementing package solutions, and independently drive design discussions to ensure the necessary health of the overall solution.

Your Impact:
- Data ingestion, integration and transformation
- Data storage and computation frameworks, performance optimizations
- Analytics & visualizations
- Infrastructure & cloud computing
- Data management platforms
- Build functionality for data ingestion from multiple heterogeneous sources in batch & real-time
- Build functionality for data analytics, search and aggregation

Preferred candidate profile:
- Minimum 2 years of experience in Big Data technologies.
- Hands-on experience with the Hadoop stack: HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow, and other components required in building end-to-end data pipelines.
- Bachelor's degree and 4 to 6 years of work experience, or any combination of education, training, and/or experience that demonstrates the ability to perform the duties of the position.
- Working knowledge of real-time data pipelines is an added advantage.
- Strong experience in at least one of the programming languages Java, Scala, or Python; Java preferable.
- Hands-on working knowledge of NoSQL and MPP data platforms like HBase, MongoDB, Cassandra, AWS Redshift, Azure SQLDW, GCP BigQuery, etc.
- Well-versed and working knowledge of data platform-related services on Azure.

Set Yourself Apart With:
- Good knowledge of traditional ETL tools (Informatica, Talend, etc.) and database technologies (Oracle, MySQL, SQL Server, Postgres) with hands-on experience.
- Knowledge of data governance processes (security, lineage, catalog) and tools like Collibra, Alation, etc.
- Knowledge of distributed messaging frameworks like ActiveMQ / RabbitMQ / Solace, search & indexing, and microservices architectures.
- Performance tuning and optimization of data pipelines.
- Cloud data specialty and other related Big Data technology certifications.

A Tip from the Hiring Manager: Join the team to sharpen your skills and expand your collaborative methods. Make an impact on our clients and their businesses directly through your work.
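Several of the stack components above (Kafka, Spark Streaming) come together in real-time ingestion. The sketch below shows a minimal Spark Structured Streaming read from Kafka in Python; the broker, topic, and sink paths are hypothetical, and the spark-sql-kafka connector package is assumed to be available to the Spark session.

```python
# Sketch of a real-time ingestion path: Spark Structured Streaming reading from
# Kafka and landing raw events as parquet. Broker, topic, and paths are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("kafka_ingest").getOrCreate()

events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")   # hypothetical broker
         .option("subscribe", "clickstream")                  # hypothetical topic
         .option("startingOffsets", "latest")
         .load()
)

# Kafka delivers key/value as binary; cast to string before downstream parsing.
decoded = events.select(
    F.col("key").cast("string"),
    F.col("value").cast("string"),
    "timestamp",
)

query = (
    decoded.writeStream.format("parquet")
           .option("path", "/data/landing/clickstream")             # hypothetical sink
           .option("checkpointLocation", "/data/checkpoints/clickstream")
           .start()
)
query.awaitTermination()
```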
Posted 1 month ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Our Purpose: Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we’re helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential.

Title and Summary: Senior Software Engineer

Job Summary: As a Senior Software Engineer focused on Data Quality, you will lead the design, development, and deployment of scalable data quality frameworks and pipelines. You will work closely with data engineers, analysts, and business stakeholders to build robust solutions that validate, monitor, and improve data quality across large-scale distributed systems.

Key Responsibilities:
- Lead the design and implementation of data quality frameworks and automated validation pipelines using Python, Apache Spark, and Hadoop ecosystem tools.
- Develop, deploy, and maintain scalable ETL/ELT workflows using Apache Airflow and Apache NiFi to ensure seamless data ingestion, transformation, and quality checks.
- Collaborate with cross-functional teams to understand data quality requirements and translate them into technical solutions.
- Define and enforce data quality standards, rules, and monitoring processes.
- Perform root cause analysis on data quality issues and implement effective fixes and enhancements.
- Mentor and guide junior engineers, conducting code reviews and fostering best practices.
- Continuously evaluate and integrate new tools and technologies to enhance data quality capabilities.
- Ensure high code quality, performance, and reliability in all data processing pipelines.
- Create comprehensive documentation and reports on data quality metrics and system architecture.

Required Skills & Experience:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field, with data engineering experience.
- 5+ years of professional experience in software development, with at least 2 years in a lead or senior engineering role.
- Strong proficiency in Python programming and experience building data processing applications.
- Hands-on expertise with Apache Spark and Hadoop for big data processing.
- Solid experience with workflow orchestration tools like Apache Airflow.
- Experience designing and managing data ingestion and integration pipelines with Apache NiFi.
- Understanding of data quality automation, CI/CD, Jenkins, Oracle, Power BI, and Splunk.
- Deep understanding of data quality concepts, data validation techniques, and distributed data systems.
- Strong problem-solving skills and ability to lead technical discussions.
- Experience with cloud platforms (AWS, GCP, or Azure) is a plus.
- Excellent communication and collaboration skills.

Corporate Security Responsibility: All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization and, therefore, it is expected that every person working for, or on behalf of, Mastercard is responsible for information security and must: abide by Mastercard’s security policies and practices; ensure the confidentiality and integrity of the information being accessed; report any suspected information security violation or breach; and complete all periodic mandatory security trainings in accordance with Mastercard’s guidelines. R-251594
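As an illustration of the automated validation pipelines described above, the PySpark sketch below computes a few simple data quality metrics and fails the run when a rule is breached. The input path, key column, and thresholds are hypothetical, not part of the posting.

```python
# Sketch of an automated data quality check in PySpark: compute simple
# completeness and uniqueness metrics for a frame and enforce thresholds.
# Input path, key column, and thresholds are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq_checks").getOrCreate()
df = spark.read.parquet("s3a://example-lake/transactions/")   # hypothetical input

total = df.count()
metrics = {
    "row_count": total,
    "null_txn_id_pct": df.filter(F.col("txn_id").isNull()).count() / max(total, 1),
    "duplicate_txn_id_pct": (total - df.dropDuplicates(["txn_id"]).count()) / max(total, 1),
}

# Fail the pipeline run if any rule is breached; the 1% threshold is illustrative.
violations = {name: value for name, value in metrics.items()
              if name.endswith("_pct") and value > 0.01}

print("Data quality metrics:", metrics)
if violations:
    raise ValueError(f"Data quality rules breached: {violations}")
```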
Posted 1 month ago
2.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
At PwC, our people in software and product innovation focus on developing cutting-edge software solutions and driving product innovation to meet the evolving needs of clients. These individuals combine technical experience with creative thinking to deliver innovative software products and solutions. In testing and quality assurance at PwC, you will focus on the process of evaluating a system or software application to identify any defects, errors, or gaps in its functionality. Working in this area, you will execute various test cases and scenarios to validate that the system meets the specified requirements and performs as expected.

Driven by curiosity, you are a reliable, contributing member of a team. In our fast-paced environment, you are expected to adapt to working with a variety of clients and team members, each presenting varying challenges and scope. Every experience is an opportunity to learn and grow. You are expected to take ownership and consistently deliver quality work that drives value for our clients and success as a team. As you navigate through the Firm, you build a brand for yourself, opening doors to more opportunities.

Skills - examples of the skills, knowledge, and experiences you need to lead and deliver value at this level include but are not limited to:
- Apply a learning mindset and take ownership for your own development.
- Appreciate diverse perspectives, needs, and feelings of others.
- Adopt habits to sustain high performance and develop your potential.
- Actively listen, ask questions to check understanding, and clearly express ideas.
- Seek, reflect, act on, and give feedback.
- Gather information from a range of sources to analyse facts and discern patterns.
- Commit to understanding how the business works and building commercial awareness.
- Learn and apply professional and technical standards (e.g. refer to specific PwC tax and audit guidance), uphold the Firm's code of conduct and independence requirements.

Job Summary: A career in our Managed Services team will provide you with an opportunity to collaborate with a wide array of teams to help our clients implement and operate new capabilities, achieve operational efficiencies, and harness the power of technology. Our Data, Testing & Analytics as a Service team brings a unique combination of industry expertise, technology, data management and managed services experience to create sustained outcomes for our clients and improve business performance. We empower companies to transform their approach to analytics and insights while building your skills in exciting new directions. Have a voice at our table to help design, build and operate the next generation of software and services that manage interactions across all aspects of the value chain.

Minimum Degree Required (BQ): Bachelor's degree
Preferred Field(s) of Study: Computer and Information Science, Management Information Systems
Minimum Year(s) of Experience (BQ): Minimum of 2 years of experience
Required/Preferred Knowledge and Skills: As an ETL Tester, you will be responsible for designing, developing, and executing SQL scripts to ensure the quality and functionality of our ETL processes.
You will work closely with our development and data engineering teams to identify test requirements and drive the implementation of automated testing solutions. Key Responsibilities Collaborate with data engineers to understand ETL workflows and requirements. Perform data validation and testing to ensure data accuracy and integrity. Create and maintain test plans, test cases, and test data. Identify, document, and track defects, and work with development teams to resolve issues. Participate in design and code reviews to provide feedback on testability and quality. Develop and maintain automated test scripts using Python for ETL processes. Ensure compliance with industry standards and best practices in data testing. Qualifications Solid understanding of SQL and database concepts. Proven experience in ETL testing and automation. Strong proficiency in Python programming. Familiarity with ETL tools such as Apache NiFi, Talend, Informatica, or similar. Knowledge of data warehousing and data modeling concepts. Strong analytical and problem-solving skills. Excellent communication and collaboration abilities. Experience with version control systems like Git. Preferred Qualifications Experience with cloud platforms such as AWS, Azure, or Google Cloud. Familiarity with CI/CD pipelines and tools like Jenkins or GitLab. Knowledge of big data technologies such as Hadoop, Spark, or Kafka.
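For the automated Python test scripts mentioned above, a common pattern is to unit-test individual transformation rules with pytest. The rule and test data below are hypothetical stand-ins for real mapping specifications, not the client's actual logic.

```python
# Sketch of a pytest-style automated check for a single ETL transformation rule
# (trimming and upper-casing a country code). Rule and test data are hypothetical.
import pytest

def standardise_country(raw: str) -> str:
    """The transformation under test: trim whitespace and upper-case the code."""
    return raw.strip().upper()

@pytest.mark.parametrize(
    "raw, expected",
    [(" in ", "IN"), ("us", "US"), ("Gb\n", "GB")],
)
def test_standardise_country(raw, expected):
    assert standardise_country(raw) == expected
```

Running `pytest` over a suite of such checks gives the repeatable, automated validation the role describes.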
Posted 1 month ago
4.0 - 10.0 years
0 Lacs
India
On-site
We are a fast-moving team looking for an individual with a passion to lead the transformation to DevOps and Continuous Delivery across the organization, working directly with Development, QA and Support using agile methodologies to implement innovative, reliable and secure solutions in the Cloud.

Key Accountability:
- Work together with the development teams to bring new products into the cloud.
- Design, implement and maintain AWS infrastructure including operation, security and compliance aspects.
- Create processes to perform continuous deployment, including full orchestration of the deployment process.
- Develop tools and scripts to improve efficiency of operational tasks.
- Create and provide best practices to the organization for DevOps, SecOps, CI/CD, and infrastructure.
- Implement monitoring processes and design/deploy monitoring dashboards.
- Build and support system automation, deployment, and continuous integration tools.
- Thoroughly demonstrated working knowledge of Software Development Life Cycle (SDLC) methodology (processes and deliverables).
- Help to maintain and monitor production environments.
- Help to design and support internal development environments inside both AWS and Azure.
- Excellent verbal and written communication skills required; must be able to communicate effectively with other team members.

Basic Qualifications:
- 4-10 years demonstrated experience.
- 1-5 years working with AWS.
- 1-5 years working in Linux environments.
- 1-5 years with scripting languages, with at least Bash or Python.
- 1-5 years using Git SCM, GitHub.

Preferred Qualifications:
- Experience with Agile methodology.
- Experience in cloud infrastructures and deployment models.
- Experience in understanding and execution of cloud best practices with respect to operations and security.
- Experience with AWS services and CLI.
- Container infrastructure: ECS, EKS, Kubernetes, Docker Swarm.
- Experience with Apache NiFi, Nginx, MongoDB, RabbitMQ and/or Elasticsearch and cluster configurations.
- Jira, Confluence, Jenkins, TeamCity, Artifactory, DockerHub, GitHub.
- Experience administering databases such as Postgres, MariaDB, Mongo, MySQL, and/or MSSQL.
- Log aggregation experience using SumoLogic or similar tools.
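As a small example of the operational Python scripting this role involves, the boto3 sketch below reports running EC2 instances that are missing a required tag. The tag key is a hypothetical policy, and region/credentials are assumed to come from the environment.

```python
# Small operational script: use boto3 to report running EC2 instances that are
# missing a required tag. The tag key and AWS credential/region setup are hypothetical.
import boto3

REQUIRED_TAG = "owner"  # hypothetical tagging policy

ec2 = boto3.client("ec2")
resp = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)

for reservation in resp.get("Reservations", []):
    for instance in reservation.get("Instances", []):
        tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
        if REQUIRED_TAG not in tags:
            print(f"{instance['InstanceId']} is missing the '{REQUIRED_TAG}' tag")
```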
Posted 1 month ago
3.0 - 5.0 years
0 Lacs
India
On-site
Impelsys Overview Impelsys is a market leader in providing cutting-edge digital transformation solutions & services – leveraging digital, AI, ML, cloud, analytics, and emerging technologies. We deliver custom solutions that meet customers’ technology needs wherever they are in their digital lifecycle. With 20+ years of experience, we have helped our clients to build, deploy & streamline their digital infrastructure by providing innovative solutions & value-driven services to transform their business to thrive in a digital economy. We offer expertise in providing Products & Platforms, Enterprise Technology Services, and Learning & Content Services to drive business success. Some of our marquee customers include Elsevier, Wolters Kluwer, Pearson, American Heart Association, Disney, World Bank, International Monetary Fund, BBC, Encyclopedia Britannica, McGraw-Hill, Royal College of Nursing, and Wiley. Our technology stack is varied and cutting edge. We have moved from monolithic applications to distributed architecture to now, microservices based architecture. Our platform runs on Java, LAMP and AngularJS. Our mobile apps are native apps as well as apps built using React, Xamarin and Ionic. Our bespoke development services' TRM includes AngularJS, jQuery, Bootstrap, Cordova, Kafka, PNGINX, Propel, MongoDB, MySQL, DynamoDB, and Docker among others. Impelsys is a Great Place to Work certified company & has a global footprint of 1,100+ employees, with its delivery centers in New York, USA, Amsterdam, Porto and Bangalore & Mangalore in India. Overview: Contribute to our client’s publishing ecosystem by supporting, configuring, and developing content management systems to sustain the publishing environment. Knowledge in XML technologies, XSLT, XQuery, XPath and related technologies and Schematron, Content management, and full-stack systems is essential to support the development process. Collaboration with a team of internal and external resources to configure application software and databases, writing, testing, and deploying code to support end users is critical. A key function of the role includes translating end user requirements to deliver efficient solutions that align with business objectives. 
Essential Job Functions and Responsibilities: The job functions include, but are not limited to, the following: Work effectively within a small team to maximize productivity and efficiency by coordinating seamlessly across global time zones, collaborating with both internal and external team members Provide technical support for relational and XML-based content systems to manipulate data and support business objectives Manage integrations between RSuite, Mark Logic, and MySQL databases Gather and interpret Voice of the Customer (VoC) feedback to ensure our systems align with customer needs and develop solutions to further support end users Write clear technical specifications and comprehensive documentation Proficiently develop XQuery and XSLT code to enhance system functionality Maintain and extend DTDs/schemas/schematron, XSD Streamline testing, code review, and deployment processes using automation technologies such as Postman and Jenkins Deploy and test code across development, staging, and production environments Ensure change requests are implemented accurately and on schedule while keeping customers advised of on-going development priorities Conduct in-depth analysis of requirements and enhancement requests from end users and align requirements with business objectives Find and correct XML database inconsistencies and design and implement solutions to reduce degradation of data Implement medium to large system improvements utilizing XQuery and XSLT code to reduce technical debt Administer the MarkLogic, MySQL, and RSuite application environment on both Windows and Linux servers Demonstrate ownership and an ability to solve complex problems by researching and implementing solutions Embrace a continuous improvement mindset by researching new technologies and recommending solutions that enhance the content management publishing workflow Ability to work independently and as part of a team. Knowledge of web services and APIs. Linux administration Qualifications and Education: Any combination equivalent to, but not limited to, the following: Three to five years of working with content management systems and publishing workflows. Solid understanding and minimum three years of experience working with XML, XQuery, and XSLT. Proficiency in metadata modeling within a content management system. Comfortable with Windows and Linux server administration. Exposure to any of the following technologies is a plus: MarkLogic, RSuite, Java, Docker, Nifi, JSON, Javascript and frontend technologies like Angular Comfortable using XML-based tools and editors, including Schematron, XForms, and oXygen. Knowledge of scripting languages, databases, as well as declarative and object-oriented programming. Experience with DevOps tools, specifically using Git, as well as automated deployment/testing methodologies such as Jenkins. Ability to engage with stakeholders and translate their requirements into technical solutions. Bachelor's degree or equivalent experience in Information Technology, Computer Sciences, or a related field. Language, Analytical Skills and Person Specifications Any combination equivalent to, but not limited to, the following: Effective communications skills, both oral and written, are required. Must be effective at understanding and communicating with an array of stakeholders: project management, programmers and tech staff, upper management, other [client name] staff, external contractors, vendors, clients, and customers. Excellent Leadership and Teamwork. 
Working effectively with internal and external team members at various levels to achieve results through a cooperative, goal-oriented approach. Problem-solving and analytical skills: must be able to effectively analyze and troubleshoot issues, work with others to overcome obstacles, and identify and quickly deploy solutions. Multitasking: ability to manage multiple projects, switching quickly from task to task as needed. Results focus and accountability: achieving results within project schedules and deadlines, setting challenging goals, prioritizing tasks, accepting accountability, and providing leadership.
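For orientation, XSLT transforms like those described in this role can be exercised from Python with lxml; the stylesheet and sample document below are hypothetical inline examples, not the client's actual schemas or MarkLogic/RSuite code.

```python
# Illustrative sketch of applying an XSLT transform from Python with lxml.
# The stylesheet and the sample document are hypothetical inline strings.
from lxml import etree

XSLT = b"""<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/articles">
    <titles>
      <xsl:for-each select="article">
        <title><xsl:value-of select="@title"/></title>
      </xsl:for-each>
    </titles>
  </xsl:template>
</xsl:stylesheet>"""

DOC = b"""<articles>
  <article title="Content Management Basics"/>
  <article title="Schematron Validation"/>
</articles>"""

transform = etree.XSLT(etree.XML(XSLT))       # compile the stylesheet
result = transform(etree.XML(DOC))            # apply it to the document
print(etree.tostring(result, pretty_print=True).decode())
```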
Posted 1 month ago
6.0 years
12 - 15 Lacs
Bhopal
On-site
#Connections #hiring #Immediate #DataEngineer #Bhopal
Hi Connections, we are hiring a Data Engineer for our client.

Job Title: Data Engineer – Real-Time Streaming & Integration (Apache Kafka)
Location: Bhopal, Madhya Pradesh

Key Responsibilities:
· Design, develop, and maintain real-time streaming data pipelines using Apache Kafka and Kafka Connect.
· Implement and optimize ETL/ELT processes for structured and semi-structured data from various sources.
· Build and maintain scalable data ingestion, transformation, and enrichment frameworks across multiple environments.
· Collaborate with data architects, analysts, and application teams to deliver integrated data solutions that meet business requirements.
· Ensure high availability, fault tolerance, and performance tuning for streaming data infrastructure.
· Monitor, troubleshoot, and enhance Kafka clusters, connectors, and consumer applications.
· Enforce data governance, quality, and security standards throughout the pipeline lifecycle.
· Automate workflows using orchestration tools and CI/CD pipelines for deployment and version control.

Required Skills & Qualifications:
· Strong hands-on experience with Apache Kafka, Kafka Connect, and Kafka Streams.
· Expertise in designing real-time data pipelines and stream processing architectures.
· Solid experience with ETL/ELT frameworks using tools like Apache NiFi, Talend, or custom Python/Scala-based solutions.
· Proficiency in at least one programming language: Python, Java, or Scala.
· Deep understanding of message serialization formats (e.g., Avro, Protobuf, JSON).
· Strong SQL skills and experience working with data lakes, warehouses, or relational databases.
· Familiarity with schema registry, data partitioning, and offset management in Kafka.
· Experience with Linux environments, containerization, and CI/CD best practices.

Preferred Qualifications:
· Experience with cloud-native data platforms (e.g., AWS MSK, Azure Event Hubs, GCP Pub/Sub).
· Exposure to stream processing engines like Apache Flink or Spark Structured Streaming.
· Familiarity with data lake architectures, data mesh concepts, or real-time analytics platforms.
· Knowledge of DevOps tools like Docker, Kubernetes, Git, and Jenkins.

Work Experience:
· 6+ years of experience in data engineering with a focus on streaming data and real-time integrations.
· Proven track record of implementing data pipelines in production-grade enterprise environments.

Education Requirements:
· Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
· Certifications in data engineering, Kafka, or cloud data platforms are a plus.

Interested candidates, kindly share your updated profile to pavani@sandvcapitals.com or reach us on 7995292089. Thank you.

Job Type: Full-time
Pay: ₹1,200,000.00 - ₹1,500,000.00 per year
Schedule: Day shift
Experience: Data Engineer: 6 years (Required); ETL: 6 years (Required)
Work Location: In person
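As an illustration of the consumer side of the Kafka pipelines described above, here is a minimal Python consumer using the confluent-kafka client (one common choice; the posting does not mandate a specific library). The broker, group id, and topic are hypothetical.

```python
# Minimal Kafka consumer sketch using the confluent-kafka client.
# Broker, consumer group, and topic names are hypothetical.
from confluent_kafka import Consumer, KafkaError

consumer = Consumer({
    "bootstrap.servers": "broker:9092",   # hypothetical broker
    "group.id": "orders-enrichment",      # hypothetical consumer group
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["orders"])            # hypothetical topic

try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None:
            continue
        if msg.error():
            if msg.error().code() != KafkaError._PARTITION_EOF:
                print(f"Consumer error: {msg.error()}")
            continue
        # Downstream enrichment/loading would go here.
        print(f"{msg.topic()}[{msg.partition()}] offset {msg.offset()}: {msg.value()}")
finally:
    consumer.close()
```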
Posted 1 month ago
5.0 years
0 Lacs
Indore, Madhya Pradesh, India
On-site
Role: Data Engineer
Location: Indore (Hybrid)
Experience required: 5+ years
Job Description:
Build and maintain data pipelines for ingesting and processing structured and unstructured data.
Ensure data accuracy and quality through validation checks and sanity reports (a rough pandas sketch follows this posting).
Improve data infrastructure by automating manual processes and scaling systems.
Support internal teams (Product, Delivery, Onboarding) with data issues and solutions.
Analyze data trends and provide insights to inform key business decisions.
Collaborate with program managers to resolve data issues and maintain clear documentation.
Must-Have Skills:
Proficiency in SQL, Python (Pandas, NumPy), and R
Experience with ETL tools (e.g., Apache NiFi, Talend, AWS Glue)
Cloud experience with AWS (S3, Redshift, EMR, Athena, RDS)
Strong understanding of data modeling, warehousing, and data validation
Familiarity with data visualization tools (Tableau, Power BI, Looker)
Experience with Apache Airflow, Kubernetes, Terraform, Docker
Knowledge of data lake architectures, APIs, and custom data formats (JSON, XML, YAML)
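As a hedged illustration of the validation-and-sanity-report duty above (not part of the posting), a small pandas sketch; the column names and sample data are made up.

```python
import pandas as pd


def sanity_report(df: pd.DataFrame, required_cols: list) -> dict:
    """Return a small dictionary of data-quality metrics for a batch of records."""
    present = [c for c in required_cols if c in df.columns]
    return {
        "row_count": len(df),
        "missing_columns": [c for c in required_cols if c not in df.columns],
        "null_counts": df[present].isna().sum().to_dict(),
        "duplicate_rows": int(df.duplicated().sum()),
    }


# Illustrative usage with made-up data.
batch = pd.DataFrame({"order_id": [1, 2, 2, None], "amount": [10.0, 5.5, 5.5, 3.2]})
report = sanity_report(batch, required_cols=["order_id", "amount", "customer_id"])
print(report)
assert report["duplicate_rows"] == 1  # rows 1 and 2 are identical
```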
Posted 1 month ago
10.0 years
0 Lacs
Hyderābād
On-site
Our vision is to transform how the world uses information to enrich life for all. Micron Technology is a world leader in innovating memory and storage solutions that accelerate the transformation of information into intelligence, inspiring the world to learn, communicate and advance faster than ever.
Do you believe that data provides game-changing insight? Do you see data as an asset that creates a competitive advantage? Great…so do we. Micron Technology operates in a highly competitive industry where innovation depends on talented minds extracting fresh insights from an ever-expanding data universe. As you already know, this can only happen when quality data solutions are available at the right time and in the right format. Our expert team of big data engineers is dedicated to making this happen. We operate in a diverse, collaborative environment where problem solving is a team sport and creative solutions are recognized and rewarded. Does this sound like the right team for you? Good news. We’re hiring!
As a Data Engineering Manager at Micron Technology Inc., you will be a key member of our Technology Solutions group within the Smart Manufacturing and AI organization. The Data Engineering team works closely with Micron’s Front End Manufacturing and Planning Ops business area in all aspects of data, data engineering, machine learning, and advanced analytics solutions. We are looking for leaders with strong technical experience in Big Data and cloud data warehouse technologies. This role will work primarily in cloud data warehouses such as Snowflake and GCP platforms, monitoring solutions such as Splunk, and automation and machine learning using Python (a rough Snowflake-automation sketch follows this posting). You will provide technical and people leadership for the team and ensure that critical projects as well as higher-level production support are delivered with high quality in collaboration with internal Micron team members.
Job Description:
Responsibilities and Tasks:
Lead a team of Data Engineers.
Accountable for performance discussions for direct reports; engage team members and work with them on their career development.
Responsible for the development, coaching, and performance management of those who report to you.
Build, maintain, and support a positive work culture that promotes safety, security, and environmental programs.
Succession planning.
Participate in design, architecture review, and deployment of big data and cloud data warehouse solutions.
Lead and drive project requirements and deliverables.
Implement solutions that eliminate or minimize technical debt through a well-designed architecture, data model, and lifecycle.
Collaborate with key project stakeholders and I4 Solution Analysts on project needs and translate requirements into technical needs for the team of data engineers.
Bring together and share best-practice knowledge among the data engineering community.
Coach, mentor, and help develop data engineers.
Guide and manage the team through operational issues and escalations, and resolve business partner issues in a timely manner with strong collaboration and care for business priorities.
Ability to learn and be conversational with multiple utilities and tools that help with operations monitoring and alerting.
Collaborate with business partners and other teams to ensure data solutions are available, recover from failures, and operate healthily.
Contribute to site-level initiatives such as hiring, cross-pillar leadership collaboration, resource management, and engagement.
Qualifications and Experience:
10+ years developing, delivering, and/or supporting big data engineering and advanced analytics solutions.
6+ years of experience managing or leading data engineering teams.
4-5 years of hands-on experience building cloud data-centric solutions in GCP or other cloud platforms.
Intermediate to advanced programming experience, preferably in Python; Spark experience is a plus.
Proficient with ELT or ETL (preferably NiFi) techniques for complex data processing.
Proficient with various database management systems, preferably SQL Server and Snowflake.
Strong domain knowledge and understanding of manufacturing planning and scheduling data.
Candidates should be strong in data structures and data processing and in implementing complex data integrations with applications.
Good to have knowledge of a visualization tool such as Power BI or Tableau.
Demonstrated ability to lead multi-functional groups, with diverse interests and requirements, to a common objective.
Presentation skills with a high degree of comfort speaking with management and developers.
A passion for data and information with strong analytical, problem-solving, and organizational skills.
The ability to work in a dynamic, fast-paced work environment.
Self-motivated with the ability to work under minimal supervision.
Education: B.S. in Computer Science, Management Information Systems, or related fields.
About Micron Technology, Inc.
We are an industry leader in innovative memory and storage solutions transforming how the world uses information to enrich life for all. With a relentless focus on our customers, technology leadership, and manufacturing and operational excellence, Micron delivers a rich portfolio of high-performance DRAM, NAND, and NOR memory and storage products through our Micron® and Crucial® brands. Every day, the innovations that our people create fuel the data economy, enabling advances in artificial intelligence and 5G applications that unleash opportunities, from the data center to the intelligent edge and across the client and mobile user experience. To learn more, please visit micron.com/careers
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran or disability status. To request assistance with the application process and/or for reasonable accommodations, please contact hrsupport_india@micron.com
Micron prohibits the use of child labor and complies with all applicable laws, rules, regulations, and other international and industry labor standards. Micron does not charge candidates any recruitment fees or unlawfully collect any other payment from candidates as consideration for their employment with Micron.
AI alert: Candidates are encouraged to use AI tools to enhance their resume and/or application materials. However, all information provided must be accurate and reflect the candidate's true skills and experiences. Misuse of AI to fabricate or misrepresent qualifications will result in immediate disqualification.
Fraud alert: Micron advises job seekers to be cautious of unsolicited job offers and to verify the authenticity of any communication claiming to be from Micron by checking the official Micron careers website.
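Purely as a hedged sketch of how the Snowflake-plus-Python automation mentioned above might look in practice, using the snowflake-connector-python package; the account, credentials, warehouse, table, and column names are placeholders, not Micron's.

```python
import snowflake.connector

# Placeholder connection parameters, for illustration only.
conn = snowflake.connector.connect(
    account="my_account",
    user="my_user",
    password="my_password",
    warehouse="ANALYTICS_WH",
    database="MANUFACTURING",
    schema="PLANNING",
)

try:
    cur = conn.cursor()
    # Example health check: count rows loaded into a watched table in the last day.
    cur.execute(
        "SELECT COUNT(*) FROM FACT_PLAN_SNAPSHOT "
        "WHERE LOAD_TS >= DATEADD(day, -1, CURRENT_TIMESTAMP())"
    )
    (recent_rows,) = cur.fetchone()
    if recent_rows == 0:
        print("ALERT: no new rows loaded in the last 24 hours")
    else:
        print(f"OK: {recent_rows} rows loaded in the last 24 hours")
finally:
    conn.close()
```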
Posted 1 month ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
We are seeking a highly skilled and experienced Data Architect with expertise in designing and building data platforms in cloud environments. The ideal candidate will have a strong background in either AWS Data Engineering or Azure Data Engineering, along with proficiency in distributed data processing systems like Spark. Additionally, proficiency in SQL, data modeling, building data warehouses, and knowledge of ingestion tools and data governance are essential for this role. The Data Architect will also need experience with orchestration tools such as Airflow or Dagster and proficiency in Python, with knowledge of Pandas being beneficial.
Here’s what you’ll bring:
Experience in designing and building data platforms in any cloud.
Strong expertise in either AWS Data Engineering or Azure Data Engineering.
Develop and optimize data processing pipelines using distributed systems like Spark (a rough PySpark sketch follows this posting).
Create and maintain data models to support efficient storage and retrieval.
Build and optimize data warehouses for analytical and reporting purposes, utilizing technologies such as Postgres, Redshift, Snowflake, etc.
Knowledge of ingestion tools such as Apache Kafka, Apache NiFi, AWS Glue, or Azure Data Factory.
Establish and enforce data governance policies and procedures to ensure data quality and security.
Utilize orchestration tools like Airflow or Dagster to schedule and manage data workflows.
Develop scripts and applications in Python to automate tasks and processes.
Collaborate with stakeholders to gather requirements and translate them into technical specifications.
Communicate technical solutions effectively to clients and stakeholders.
Familiarity with multiple cloud ecosystems such as AWS, Azure, and Google Cloud Platform (GCP).
Experience with containerization and orchestration technologies like Docker and Kubernetes.
Knowledge of machine learning and data science concepts.
Experience with data visualization tools such as Tableau or Power BI.
Understanding of DevOps principles and practices.
Why Choose Ideas2IT:
Ideas2IT has all the good attributes of a product startup and a services company. Since we launch our own products, you will have ample opportunities to learn and contribute. However, single-product companies stagnate in the technologies they use. In our multiple product initiatives and customer-facing projects, you will have the opportunity to work on various technologies. AGI is going to change the world, and big companies like Microsoft are betting heavily on it. We are following suit.
What’s in it for you?
You will get to work on impactful products instead of back-office applications, for customers like Facebook, Siemens, Roche, and more.
You will get to work on interesting projects like the Cloud AI platform for personalized cancer treatment.
Opportunity to continuously learn newer technologies.
Freedom to bring your ideas to the table and make a difference, instead of being a small cog in a big wheel.
Showcase your talent in Shark Tanks and Hackathons conducted in the company.
About Us:
Ideas2IT stands at the intersection of Technology, Business, and Product Engineering, offering high-caliber Product Development services. Initially conceived as a CTO consulting firm, we've evolved into thought leaders in cutting-edge technologies such as Generative AI, assisting our clients in embracing innovation. Our forte lies in applying technology to address business needs, demonstrated by our track record of developing AI-driven solutions for industry giants like Facebook, Bloomberg, Siemens, Roche, and others. Harnessing our product-centric approach, we've incubated several AI-based startups, including Pipecandy, Element5, IdeaRx, and Carefi.in, that have flourished into successful ventures backed by venture capital. With fourteen years of remarkable growth behind us, we're steadfast in pursuing ambitious objectives.
P.S. We're all about diversity, and our doors are wide open to everyone. Join us in celebrating the awesomeness of differences!
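A rough, hypothetical PySpark sketch of the kind of batch pipeline work listed in the requirements above; the paths, columns, and aggregation are invented, and a real platform would drive them from configuration.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily_orders_batch").getOrCreate()

# Hypothetical input path; in practice this would come from job configuration.
raw = spark.read.json("s3a://raw-bucket/orders/2024-01-01/")

daily_summary = (
    raw.filter(F.col("status") == "COMPLETED")
       .withColumn("order_date", F.to_date("created_at"))
       .groupBy("order_date", "country")
       .agg(
           F.count("*").alias("orders"),
           F.sum("amount").alias("revenue"),
       )
)

# Write a partitioned, columnar copy for the warehouse/lakehouse layer.
daily_summary.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3a://curated-bucket/daily_order_summary/"
)

spark.stop()
```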
Posted 1 month ago
12.0 years
0 Lacs
Indore, Madhya Pradesh, India
On-site
Position: Dynamics 365 Sales Executive
Location: Indore
Employment Type: Full-Time
Time Zone: UK Time Zone
Responsibilities:
Drive B2B sales for Dynamics 365 solutions
Identify, qualify, and close leads across SMBs and enterprise clients
Build strong relationships with IT decision-makers and procurement teams
Work closely with technical teams to deliver tailored solution pitches
Achieve monthly and quarterly sales targets
Requirements:
Proven track record in Dynamics 365 / SaaS solution sales
Strong communication and consultative selling skills
Experience with CRM tools and lead generation platforms
Preferred:
Prior experience working with Microsoft CSPs or resellers
Understanding of Microsoft licensing models and deployment scenarios
Company Profile:
About Ksolves: Our organization has been running successfully for the past 12+ years and is still growing. Ksolves is a CMMI Level 3 company and is listed on the NSE and BSE in India. Ksolves is a Platinum Consulting Partner for Salesforce and an official Gold Partner for Odoo. We have a team of nearly 550+ developers and architects who are competent to take on any challenging role. We have a wide presence to provide the best resources globally, with offices in Noida, Indore, and Pune in India and overseas offices in the USA and UAE. We have a large client base across the US, UK, and Europe. We have expertise in niche technologies like Apache Spark, Apache Cassandra, Apache NiFi, Salesforce, Machine Learning, Artificial Intelligence, Big Data, OpenShift, Microservices, Mobile and Web Application Development, ROR, Penetration Testing, DevOps, etc. We have expertise in domains like healthcare, education, and banking, but are not limited to these. We provide an environment for learning, motivation, and growth, so we have a good employee retention rate; to underline this, our very first employee is still working with us. We have a pool of seniors with experience from top MNCs contributing to the growth of the company and happy with the opportunities we have provided to them. More than 30 employees have completed over 5 years with the company. We offer work and time flexibility to ensure a good work-life balance, easily approachable and supportive team members, quarterly rewards and recognitions, bi-weekly employee engagement activities for enjoyment and encouragement, and the chance to work on different technologies. Our vision is to grow continuously along with the growth of each employee.
Posted 1 month ago
0.0 - 4.0 years
8 - 12 Lacs
Chennai, Tamil Nadu
On-site
Senior ETL Developer
Job Summary:
Experience: 5-8 years
Hybrid mode
Full time/Contract
Chennai
Immediate joiner
US shift timings
Job Overview:
We are looking for a Senior ETL Developer with strong expertise in Talend, PostgreSQL, AWS, and Linux who can take ownership of projects end-to-end, lead technical implementation, and mentor team members in ETL, data integration, and cloud data workflows. The ideal candidate will have 5-8 years of hands-on experience in data integration, cloud-based ETL pipelines, data versioning, and automation, should be able to take complete ownership of project execution from design to delivery while mentoring junior developers and driving technical best practices, and must be ready to work in a hybrid setup from Chennai or Madurai.
Responsibilities:
Design and implement scalable ETL workflows using Talend and PostgreSQL.
Handle complex data transformations and integrations across structured/unstructured sources.
Develop automation scripts using Shell/Python in a Linux environment.
Build and maintain stable ETL pipelines integrated with AWS services (S3, Glue, RDS, Redshift).
Ensure data quality, governance, and version control using tools like Git and Quilt.
Troubleshoot data pipeline issues and optimize for performance.
Schedule and manage jobs using tools like Apache Airflow, Cron, or Jenkins (a rough Airflow sketch follows this posting).
Mentor team members, review code, and promote technical best practices.
Drive continuous improvement and training on modern data tools and techniques.
ETL & Integration:
Must Have: Talend (Open Studio / DI / Big Data)
Also Good: SSIS, SSRS, SAS
Bonus: Apache NiFi, Informatica
Databases:
Required: PostgreSQL (3+ years)
Bonus: Oracle, SQL Server, MySQL
Cloud Platforms:
Required: AWS (S3, Glue, RDS, Redshift)
Bonus: Azure Data Factory, GCP
Certifications: AWS / Azure (Good to have)
OS & Scripting:
Required: Linux, Shell scripting
Preferred: Python scripting
Data Versioning & Source Control:
Required: Quilt, Git/GitHub/Bitbucket
Bonus: DVC, LakeFS, Git LFS
Scheduling & Automation:
Apache Airflow, Cron, Jenkins, Talend JobServer
Bonus Tools:
REST APIs, JSON/XML, Spark, Hive, Hadoop
Visualization & Reporting:
Power BI / Tableau (Nice to have)
Soft Skills:
Strong verbal and written communication.
Proven leadership and mentoring capabilities.
Ability to manage projects independently.
Comfortable adopting and teaching new tools and methodologies.
Willingness to work in a hybrid setup from Chennai or Madurai.
Job Types: Full-time, Contractual / Temporary
Pay: ₹800,000.00 - ₹1,200,000.00 per year
Benefits: Flexible schedule
Schedule: Evening shift, Monday to Friday, Rotational shift, UK shift, US shift, Weekend availability
Experience: ETL developer: 5 years (Required); Talend/Informatica: 4 years (Required)
Location: Chennai, Tamil Nadu (Required)
Shift availability: Day Shift (Preferred), Night Shift (Preferred), Overnight Shift (Preferred)
Work Location: In person
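To illustrate the scheduling side only (not taken from the posting), a minimal DAG in the Airflow 2.4+ style that wraps a hypothetical exported Talend job and a PostgreSQL row-count check; the dag_id, script path, connection URI, and table name are invented.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="talend_orders_etl",        # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="0 2 * * *",              # nightly at 02:00
    catchup=False,
) as dag:
    extract_load = BashOperator(
        task_id="run_talend_job",
        # Placeholder command; a real deployment would call the exported Talend job script.
        bash_command="/opt/talend/jobs/orders_etl/orders_etl_run.sh",
    )

    validate = BashOperator(
        task_id="validate_postgres_load",
        # Placeholder check: fail the task if the staging table received no rows today.
        bash_command=(
            'psql "$POSTGRES_URI" -tAc '
            '"SELECT COUNT(*) FROM staging.orders WHERE loaded_on = CURRENT_DATE" '
            "| grep -qv '^0$'"
        ),
    )

    extract_load >> validate
```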
Posted 1 month ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
General Skills & Experience:
Minimum 10-18 years of experience.
Expertise in Spark (Scala/Python), Kafka, and cloud-native big data services (GCP, AWS, Azure) for ETL, batch, and stream processing.
Deep knowledge of cloud platforms (AWS, Azure, GCP), including certification (preferred).
Experience designing and managing advanced data warehousing and lakehouse architectures (e.g., Snowflake, Databricks, Delta Lake, BigQuery, Redshift, Synapse).
Proven experience with building, managing, and optimizing ETL/ELT pipelines and data workflows for large-scale systems.
Strong experience with data lakes, storage formats (Parquet, ORC, Delta, Iceberg), and data movement strategies (cloud and hybrid).
Advanced knowledge of data modeling, SQL development, data partitioning, optimization, and database administration.
Solid understanding and experience with Master Data Management (MDM) solutions and reference data frameworks.
Proficient in implementing Data Lineage, Data Cataloging, and Data Governance solutions (e.g., AWS Glue Data Catalog, Azure Purview).
Familiar with data privacy, data security, compliance regulations (GDPR, CCPA, HIPAA, etc.), and best practices for enterprise data protection.
Experience with data integration tools and technologies (e.g., AWS Glue, GCP Dataflow, Apache NiFi/Airflow, etc.).
Expertise in batch and real-time data processing architectures; familiarity with event-driven, microservices, and message-driven patterns (a rough streaming sketch follows this posting).
Hands-on experience in Data Analytics, BI & visualization tools (Power BI, Tableau, Looker, Qlik, etc.) and supporting complex reporting use cases.
Demonstrated capability with data modernization projects: migrations from legacy/on-prem systems to cloud-native architectures.
Experience with data quality frameworks, monitoring, and observability (data validation, metrics, lineage, health checks).
Background in working with structured, semi-structured, unstructured, temporal, and time series data at large scale.
Familiarity with Data Science and ML pipeline integration (DevOps/MLOps, model monitoring, and deployment practices).
Experience defining and managing enterprise metadata strategies.
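As a hedged sketch of the batch-plus-streaming architecture these requirements describe, a minimal PySpark Structured Streaming job that reads a Kafka topic and lands it in a lake path; the broker, topic, schema, and paths are invented, and running it also assumes the spark-sql-kafka connector package is available on the cluster.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import DoubleType, StringType, StructField, StructType

spark = SparkSession.builder.appName("events_stream_to_lake").getOrCreate()

# Hypothetical event schema; a schema registry would normally provide this.
event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_type", StringType()),
    StructField("amount", DoubleType()),
])

raw_stream = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker1:9092")  # placeholder broker
         .option("subscribe", "events")                      # placeholder topic
         .option("startingOffsets", "latest")
         .load()
)

events = (
    raw_stream.select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
              .select("e.*")
)

query = (
    events.writeStream
          .format("parquet")                                  # could be "delta" on a lakehouse
          .option("path", "s3a://lake/bronze/events/")        # placeholder landing path
          .option("checkpointLocation", "s3a://lake/_checkpoints/events/")
          .outputMode("append")
          .start()
)

query.awaitTermination()
```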
Posted 1 month ago
10.0 years
0 Lacs
India
Remote
Join phData, a dynamic and innovative leader in the modern data stack. We partner with major cloud data platforms like Snowflake, AWS, Azure, GCP, Fivetran, Pinecone, Glean and dbt to deliver cutting-edge services and solutions. We're committed to helping global enterprises overcome their toughest data challenges.
phData is a remote-first global company with employees based in the United States, Latin America and India. We celebrate the culture of each of our team members and foster a community of technological curiosity, ownership and trust. Even though we're growing extremely fast, we maintain a casual, exciting work environment. We hire top performers and allow you the autonomy to deliver results.
5x Snowflake Partner of the Year (2020, 2021, 2022, 2023, 2024)
Fivetran, dbt, Alation, Matillion Partner of the Year
#1 Partner in Snowflake Advanced Certifications
600+ Expert Cloud Certifications (Sigma, AWS, Azure, Dataiku, etc.)
Recognized as an award-winning workplace in the US, India and LATAM
Required Experience:
10+ years as a hands-on Solutions Architect and/or Data Engineer designing and implementing data solutions
Team lead and/or mentorship of other engineers
Ability to develop end-to-end technical solutions into production and to help ensure performance, security, scalability, and robust data integration
Programming expertise in Java, Python and/or Scala
Core cloud data platforms including Snowflake, Spark, AWS, Azure, Databricks and GCP
SQL and the ability to write, debug, and optimize SQL queries
Client-facing written and verbal communication skills and experience
Create and deliver detailed presentations
Detailed solution documentation (e.g. including POCs and roadmaps, sequence diagrams, class hierarchies, logical system views, etc.)
4-year Bachelor's degree in Computer Science or a related field
Prefer any of the following:
Production experience in core data platforms: Snowflake, AWS, Azure, GCP, Hadoop, Databricks
Cloud and distributed data storage: S3, ADLS, HDFS, GCS, Kudu, ElasticSearch/Solr, Cassandra or other NoSQL storage systems (a rough S3 housekeeping sketch follows this posting)
Data integration technologies: Spark, Kafka, event/streaming, StreamSets, Matillion, Fivetran, NiFi, AWS Data Migration Services, Azure Data Factory, Informatica Intelligent Cloud Services (IICS), Google Dataproc or other data integration technologies
Multiple data sources (e.g. queues, relational databases, files, search, API)
Complete software development lifecycle experience including design, documentation, implementation, testing, and deployment
Automated data transformation and data curation: dbt, Spark, Spark streaming, automated pipelines
Workflow management and orchestration: Airflow, AWS Managed Airflow, Luigi, NiFi
Why phData? We Offer:
Remote-First Workplace
Medical Insurance for Self & Family
Medical Insurance for Parents
Term Life & Personal Accident
Wellness Allowance
Broadband Reimbursement
Continuous learning and growth opportunities to enhance your skills and expertise
Other benefits include paid certifications, a professional development allowance, and bonuses for creating company-approved content
phData celebrates diversity and is committed to creating an inclusive environment for all employees. Our approach helps us to build a winning team that represents a variety of backgrounds, perspectives, and abilities. So, regardless of how your diversity expresses itself, you can find a home here at phData. We are proud to be an equal opportunity employer. We prohibit discrimination and harassment of any kind based on race, color, religion, national origin, sex (including pregnancy), sexual orientation, gender identity, gender expression, age, veteran status, genetic information, disability, or other applicable legally protected characteristics. If you would like to request an accommodation due to a disability, please contact us at People Operations.
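Not from the posting; simply to give a hedged flavor of the hands-on cloud-storage work referenced above, a short boto3 sketch that promotes freshly landed files from a landing prefix to a processed prefix in S3. The bucket and prefixes are invented, and credentials are assumed to come from the environment.

```python
import boto3

# Placeholder bucket and prefix names, for illustration only.
BUCKET = "client-data-lake"
LANDING_PREFIX = "landing/sales/"
PROCESSED_PREFIX = "processed/sales/"

s3 = boto3.client("s3")

# List newly landed files and "promote" them to the processed area.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=BUCKET, Prefix=LANDING_PREFIX):
    for obj in page.get("Contents", []):
        key = obj["Key"]
        new_key = PROCESSED_PREFIX + key[len(LANDING_PREFIX):]
        s3.copy_object(
            Bucket=BUCKET,
            Key=new_key,
            CopySource={"Bucket": BUCKET, "Key": key},
        )
        s3.delete_object(Bucket=BUCKET, Key=key)
        print(f"moved s3://{BUCKET}/{key} -> s3://{BUCKET}/{new_key}")
```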
Posted 1 month ago