7.0 - 12.0 years
15 - 27 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
Altimetrik is hiring an Azure Data Engineer with good experience in Python, PySpark, SQL, Azure, and Data Modelling.
Location: Hyderabad, Bangalore, Chennai, Pune
Experience: 7 to 15 years
Notice period: Immediate to 1-week joiners
If you are interested, please share your profile at rmuppidi@altimetrik.com
Posted 1 week ago
0 years
0 Lacs
Bengaluru East, Karnataka, India
On-site
Responsibilities
- Understand problem statements and independently implement data science solutions and techniques.
- Collaborate with stakeholders to identify opportunities for leveraging data to drive business solutions.
- Quickly learn and adapt to new tools, platforms, or programming languages.
- Conceptualize, design, and deliver high-quality solutions and actionable insights.
- Conduct data gathering, requirements analysis, and research for solution development.
- Work closely with cross-functional teams, including Data Engineering and Product Development, to implement models and monitor their outcomes.
- Develop and deploy AI/ML-based solutions for problems such as customer segmentation and targeting, propensity modeling, exploratory data analysis (EDA), RFM analysis, mission segmentation, price optimization, promo optimization, customer lifetime value (CLTV) analysis, and more.
- Operate in an Agile development environment to ensure timely delivery of solutions.

Must-Have Skills
- Python
- SQL
- Power BI
- Strong problem-solving abilities with a focus on delivering measurable business outcomes.
- Good understanding of statistical concepts and techniques.
- Experience in the retail industry or a strong interest in solving retail business challenges.

Good-to-Have Skills
- Familiarity with PySpark and Databricks.
- Knowledge of cloud infrastructure and architecture.
- Experience with tools like ClickUp or similar project management platforms.
- Hands-on experience with other data visualization tools.
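The RFM analysis named in the responsibilities can be sketched in plain Python: score each customer 1-5 on Recency, Frequency, and Monetary value by quintile rank. This is a minimal illustration of the general technique; the function names and data shape are invented here, not taken from the posting.

```python
from datetime import date

def quintile_score(value, sorted_values, reverse=False):
    """Return a 1-5 score based on which quintile `value` falls into."""
    rank = sorted_values.index(value)                 # position among all customers
    score = 1 + (5 * rank) // len(sorted_values)      # map rank to 1..5
    return 6 - score if reverse else score

def rfm_scores(customers, today):
    """customers: {id: (last_purchase_date, n_orders, total_spend)}."""
    recency = {c: (today - d).days for c, (d, _, _) in customers.items()}
    freq = {c: n for c, (_, n, _) in customers.items()}
    spend = {c: m for c, (_, _, m) in customers.items()}
    r_sorted, f_sorted, m_sorted = (sorted(recency.values()),
                                    sorted(freq.values()),
                                    sorted(spend.values()))
    return {
        c: (
            quintile_score(recency[c], r_sorted, reverse=True),  # fewer days = better
            quintile_score(freq[c], f_sorted),
            quintile_score(spend[c], m_sorted),
        )
        for c in customers
    }
```

A customer scoring (5, 5, 5) bought recently, often, and for high value; segments such as "champions" or "at risk" are then defined on top of these triplets.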
Posted 1 week ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Position: Grow your career with an exciting opportunity with us, where you will be a part of creating software solutions that help to change lives - millions of lives. As a Data Engineer, you will have the opportunity to be a member of a focused team dedicated to helping to make the health care system work better for everyone. Here, you'll partner with some of the smartest people you've ever worked with to design solutions that meet a wide range of health consumer needs.

Role: Azure Data Engineer
Location: Hyderabad
Experience: 5 to 10 Years
Job Type: Full-Time Employment

What You'll Do:
- Design and implement scalable ETL/ELT pipelines using Azure Data Factory.
- Develop and optimize big data solutions using Azure Databricks and PySpark.
- Write efficient and complex SQL queries for data extraction, transformation, and analysis.
- Collaborate with data architects, analysts, and business stakeholders to understand data requirements.
- Ensure data quality, integrity, and security across all data pipelines.
- Monitor and troubleshoot data workflows and performance issues.
- Implement best practices for data engineering, including CI/CD, version control, and documentation.

Expertise You'll Bring:
- 3+ years of experience in data engineering with a strong focus on Azure cloud technologies.
- Proficiency in Azure Data Factory, Azure Databricks, PySpark, and SQL.
- Experience with data modeling, data warehousing, and performance tuning.
- Familiarity with version control systems like Git and CI/CD pipelines.
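The data-quality responsibility above usually comes down to row-level checks before loading. A minimal pure-Python sketch of the idea, assuming hypothetical column names and rules; in the actual role this logic would typically live inside Data Factory or Databricks pipelines:

```python
def validate_row(row, required=("patient_id", "event_date"), numeric=("cost",)):
    """Return a list of data-quality errors for one record (empty list = clean)."""
    errors = []
    for col in required:
        if not row.get(col):                            # missing or empty value
            errors.append(f"missing {col}")
    for col in numeric:
        try:
            float(row[col])
        except (KeyError, TypeError, ValueError):
            errors.append(f"non-numeric {col}")
    return errors

def split_clean_dirty(rows):
    """Partition rows into (clean, rejected) for loading vs. quarantine review."""
    clean, rejected = [], []
    for row in rows:
        errs = validate_row(row)
        if errs:
            rejected.append((row, errs))
        else:
            clean.append(row)
    return clean, rejected
```

Rejected rows are kept alongside their error lists so they can be quarantined and investigated rather than silently dropped.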
Benefits:
- Competitive salary and benefits package
- Culture focused on talent development with quarterly promotion cycles and company-sponsored higher education and certifications
- Opportunity to work with cutting-edge technologies
- Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards
- Annual health check-ups
- Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents

Inclusive Environment: Persistent Ltd. is dedicated to fostering diversity and inclusion in the workplace. We invite applications from all qualified individuals, including those with disabilities, and regardless of gender or gender preference. We welcome diverse candidates from all backgrounds. We offer hybrid work options and flexible working hours to accommodate various needs and preferences. Our office is equipped with accessible facilities, including adjustable workstations, ergonomic chairs, and assistive technologies to support employees with physical disabilities. If you are a person with disabilities and have specific requirements, please inform us during the application process or at any time during your employment. We are committed to creating an inclusive environment where all employees can thrive.

Our company fosters a values-driven and people-centric work environment that enables our employees to:
- Accelerate growth, both professionally and personally
- Impact the world in powerful, positive ways, using the latest technologies
- Enjoy collaborative innovation, with diversity and work-life wellbeing at the core
- Unlock global opportunities to work and learn with the industry's best

Let's unleash your full potential at Persistent. "Persistent is an Equal Opportunity Employer and prohibits discrimination and harassment of any kind."
Posted 1 week ago
6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Position: Grow your career with an exciting opportunity with us, where you will be a part of creating software solutions that help to change lives - millions of lives. As a Data Engineer, you will have the opportunity to be a member of a focused team dedicated to helping to make the health care system work better for everyone. Here, you'll partner with some of the smartest people you've ever worked with to design solutions that meet a wide range of health consumer needs.

Role: Azure Data Engineer
Location: Hyderabad
Experience: 6 to 12 Years
Job Type: Full-Time Employment

What You'll Do:
- Design and implement scalable ETL/ELT pipelines using Azure Data Factory.
- Develop and optimize big data solutions using Azure Databricks and PySpark.
- Write efficient and complex SQL queries for data extraction, transformation, and analysis.
- Collaborate with data architects, analysts, and business stakeholders to understand data requirements.
- Ensure data quality, integrity, and security across all data pipelines.
- Monitor and troubleshoot data workflows and performance issues.
- Implement best practices for data engineering, including CI/CD, version control, and documentation.

Expertise You'll Bring:
- 3+ years of experience in data engineering with a strong focus on Azure cloud technologies.
- Proficiency in Azure Data Factory, Azure Databricks, PySpark, and SQL.
- Experience with data modeling, data warehousing, and performance tuning.
- Familiarity with version control systems like Git and CI/CD pipelines.
Posted 1 week ago
0 years
0 Lacs
Gurgaon, Haryana, India
On-site
dunnhumby is the global leader in Customer Data Science, empowering businesses everywhere to compete and thrive in the modern data-driven economy. We always put the Customer First. Our mission: to enable businesses to grow and reimagine themselves by becoming advocates and champions for their Customers. With deep heritage and expertise in retail – one of the world's most competitive markets, with a deluge of multi-dimensional data – dunnhumby today enables businesses all over the world, across industries, to be Customer First. dunnhumby employs nearly 2,500 experts in offices throughout Europe, Asia, Africa, and the Americas, working for transformative, iconic brands such as Tesco, Coca-Cola, Meijer, Procter & Gamble and Metro.

We are seeking a talented Engineering Manager with MLOps expertise to lead a team of engineers in developing products that help retailers transform their Retail Media business in a way that achieves maximum ad revenue and enables massive scale. As an Engineering Manager, you will play a pivotal role in designing and delivering high-quality software solutions. You will be responsible for leading a team, mentoring engineers, contributing to system architecture, and ensuring adherence to engineering best practices. Your technical expertise, leadership skills, and ability to drive results will be key to the success of our products.

What you will be doing?
You will lead the charge in ensuring operational efficiency and delivering high-value solutions. You'll mentor and develop a high-performing team of Big Data and MLOps engineers, driving best practices in software development, data management, and model deployment. With a focus on robust technical design, you'll ensure solutions are secure, scalable, and efficient. Your role will involve hands-on development to tackle complex challenges, collaborating across teams to define requirements, and delivering innovative solutions.
You'll keep stakeholders and senior management informed on progress, risks, and opportunities while staying ahead of advancements in AI/ML technologies and driving their application. With an agile mindset, you will overcome challenges and deliver impactful solutions that make a difference.

Technical Expertise
- Proven experience in microservices architecture, with hands-on knowledge of Docker and Kubernetes for orchestration.
- Proficiency in MLOps and machine learning workflows using tools like Spark.
- Strong command of SQL and PySpark programming.
- Expertise in Big Data solutions such as Spark and Hive, with advanced Spark optimization and tuning skills.
- Hands-on experience with Big Data orchestrators like Airflow.
- Proficiency in Python programming, particularly with frameworks like FastAPI or equivalent API development tools.
- Experience in unit testing, code quality assurance, and the use of Git or other version control systems.

Cloud and Infrastructure
- Practical knowledge of cloud-based data stores, such as Redshift and BigQuery (preferred).
- Experience in cloud solution architecture, especially with GCP and Azure.
- Familiarity with GitLab CI/CD pipelines is a bonus.

Monitoring and Scalability
- Solid understanding of logging, monitoring, and alerting systems for production-level big data pipelines.
- Prior experience with scalable architectures and distributed processing frameworks.

Soft Skills and Additional Plus Points
- A collaborative approach to working within cross-functional teams.
- Ability to troubleshoot complex systems and provide innovative solutions.
- Familiarity with GitLab for CI/CD and infrastructure automation tools is an added advantage.

What You Can Expect From Us
We won't just meet your expectations. We'll defy them. So you'll enjoy the comprehensive rewards package you'd expect from a leading technology company. But also, a degree of personal flexibility you might not expect. Plus, thoughtful perks, like flexible working hours and your birthday off.
You'll also benefit from an investment in cutting-edge technology that reflects our global ambition. But with a nimble, small-business feel that gives you the freedom to play, experiment and learn. And we don't just talk about diversity and inclusion. We live it every day – with thriving networks including dh Gender Equality Network, dh Proud, dh Family, dh One, dh Enabled and dh Thrive as the living proof. We want everyone to have the opportunity to shine and perform at their best throughout our recruitment process. Please let us know how we can make this process work best for you.

Our approach to Flexible Working
At dunnhumby, we value and respect difference and are committed to building an inclusive culture by creating an environment where you can balance a successful career with your commitments and interests outside of work. We believe that you will do your best at work if you have a work / life balance. Some roles lend themselves to flexible options more than others, so if this is important to you please raise this with your recruiter, as we are open to discussing agile working opportunities during the hiring process.

For further information about how we collect and use your personal information please see our Privacy Notice which can be found (here)
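One concrete instance of the Spark tuning skills this posting asks for: a widely used rule of thumb sizes shuffle partitions at roughly 128 MB of data each, then sets `spark.sql.shuffle.partitions` accordingly. The helper below derives such a value from an input size; the 128 MB target is a common community heuristic, not anything specific to this employer.

```python
def shuffle_partitions(input_bytes,
                       target_partition_bytes=128 * 1024 * 1024,
                       min_partitions=8):
    """Suggest a spark.sql.shuffle.partitions value so that each shuffle
    partition handles roughly 128 MB, never dropping below a small floor."""
    needed = -(-input_bytes // target_partition_bytes)   # ceiling division
    return max(min_partitions, needed)
```

For a 10 GB shuffle this suggests 80 partitions; tiny inputs fall back to the floor so small jobs still parallelize across executors.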
Posted 1 week ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Join us as a Principal Engineer, Python and PySpark

- This is an exciting and challenging opportunity to work in a collaborative, agile and forward-thinking team environment
- With your software development background, you'll be delivering software components to enable the delivery of platforms, applications and services for the bank
- As well as developing your technical talents, you'll have the opportunity to build project and leadership skills which will open up a range of exciting career options
- We're offering this role at vice president level

What you'll do
As a Principal Engineer, you'll be driving development of software and tools to accomplish project and departmental objectives by converting functional and non-functional requirements into suitable designs. You'll play a leading role in planning, developing and deploying high-performance, robust and resilient systems for the bank, and will develop your leadership skills as you manage the technical delivery of one or more software engineering teams. You'll also gain a distinguished leadership status in the software engineering community as you lead the wider participation in internal and industry-wide events, conferences and other activities.
You'll also be:
- Designing and developing high-performance and high-availability applications, using proven frameworks and technologies
- Making sure that the bank's systems follow excellent architectural and engineering principles, and are fit for purpose
- Monitoring the technical progress against plans while safeguarding functionality, scalability and performance, and providing progress updates to stakeholders
- Designing and developing reusable libraries and APIs for use across the bank
- Writing unit and integration tests within automated test environments to ensure code quality

The skills you'll need
You'll come with a background in software engineering, software or database design and architecture, as well as significant experience developing software within an SOA or microservices paradigm. You'll need at least twelve years of experience working with Python, PySpark and AWS.

You'll also need:
- Experience of leading software development teams, introducing and executing technical strategies
- Knowledge of using industry-recognised frameworks and development tooling
- Experience of test-driven development and using automated test frameworks, mocking and stubbing and unit testing tools
- A background in designing or implementing APIs
- Experience of supporting, modifying and maintaining systems and code developed by teams other than your own
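The unit-testing expectation above is easy to illustrate in miniature: a small function plus the tests that would sit in the automated suite. The function and its cases are invented for illustration, not taken from any bank system.

```python
def mask_account_number(acct: str, visible: int = 4) -> str:
    """Return the account number with all but the last `visible` characters
    masked, e.g. for safe display in logs or user interfaces."""
    if len(acct) <= visible:
        return acct                       # too short to mask meaningfully
    return "*" * (len(acct) - visible) + acct[-visible:]

# unit tests, as they might appear in an automated test suite
def test_mask_account_number():
    assert mask_account_number("12345678") == "****5678"
    assert mask_account_number("123") == "123"
    assert mask_account_number("12345678", visible=2) == "******78"

test_mask_account_number()
```

In test-driven development these assertions would be written first, then the function implemented until they pass.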
Posted 1 week ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Details: Job Description
Stefanini Group is a multinational company with a global presence in 41 countries and 44 languages, specializing in technological solutions. We believe in digital innovation and agility to transform businesses for a better future. Our diverse portfolio includes consulting, marketing, mobility, AI services, service desk, field service, and outsourcing solutions.

Job Requirements
Role: Data Scientist
Experience: 6 - 9 years
Location: Pune only (hybrid)
Interview: 2 rounds
Shift: 1 PM to 10 PM

Mandatory Skills:
- Experience in deep learning engineering (mostly on MLOps)
- Strong NLP/LLM experience and processing text using LLMs
- Proficient in PySpark/Databricks and Python programming
- Building backend applications (data processing etc.) using Python and deep learning frameworks
- Deploying models and building APIs (FastAPI, Flask)
- Need to have experience working with GPUs
- Working knowledge of vector databases such as Milvus, Azure Cognitive Search, Qdrant, etc.
- Experience in transformers and working with Hugging Face models such as Llama, Mixtral, and embedding models

Good to Have:
- Knowledge and experience in Kubernetes, Docker, etc.
- Cloud experience working with VMs and Azure Storage
- Sound data engineering experience
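Processing long text with an LLM, as the mandatory skills describe, usually starts with splitting documents into overlapping chunks that fit a model's context window before embedding them into a vector database. A minimal sketch, with illustrative chunk sizes rather than values from any real pipeline:

```python
def chunk_text(text, chunk_size=500, overlap=50):
    """Split `text` into overlapping character chunks for embedding or LLM
    processing; the overlap preserves context across chunk boundaries."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must exceed overlap")
    step = chunk_size - overlap
    chunks = [text[i:i + chunk_size] for i in range(0, len(text), step)]
    # drop a trailing fragment already fully contained in the previous chunk
    if len(chunks) > 1 and len(chunks[-1]) <= overlap:
        chunks.pop()
    return chunks
```

Production splitters typically break on sentence or token boundaries rather than raw characters, but the overlap idea is the same.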
Posted 1 week ago
3.0 - 8.0 years
6 - 15 Lacs
Ahmedabad
Work from Office
Job Description:
As an ETL Developer, you will be responsible for designing, building, and maintaining ETL pipelines using the MSBI stack, Azure Data Factory (ADF) and Fabric. You will work closely with data engineers, analysts, and other stakeholders to ensure data is accessible, reliable, and processed efficiently.

Key Responsibilities:
- Design, develop, and deploy ETL pipelines using ADF and Fabric.
- Collaborate with data engineers and analysts to understand data requirements and translate them into efficient ETL processes.
- Optimize data pipelines for performance, scalability, and robustness.
- Integrate data from various sources, including S3, relational databases, and APIs.
- Implement data validation and error handling mechanisms to ensure data quality.
- Monitor and troubleshoot ETL jobs to ensure data accuracy and pipeline reliability.
- Maintain and update existing data pipelines as data sources and requirements evolve.
- Document ETL processes, data models, and pipeline configurations.

Qualifications:
- Experience: 3+ years of experience in ETL development, with a focus on ADF, the MSBI stack, SQL, Power BI, and Fabric.
- Technical Skills: Strong expertise in ADF, the MSBI stack, SQL, and Power BI. Proficiency in programming languages such as Python or Scala. Hands-on experience with ADF, Fabric, Power BI, and MSBI. Solid understanding of data warehousing concepts, data modeling, and ETL best practices. Familiarity with orchestration tools like Apache Airflow is a plus.
- Data Integration: Experience with integrating data from diverse sources, including relational databases, APIs, and flat files.
- Problem-Solving: Strong analytical and problem-solving skills with the ability to troubleshoot complex ETL issues.
- Communication: Excellent communication skills, with the ability to work collaboratively with cross-functional teams.
- Education: Bachelor's degree in computer science, engineering, or a related field, or equivalent work experience.
Nice to Have:
- Experience with data lakes and big data processing.
- Knowledge of data governance and security practices in a cloud environment.
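The error-handling responsibility in this posting is often realized as a retry-then-quarantine pattern around each load step. A hedged pure-Python sketch; the retry counts and broad exception handling are illustrative, and in ADF this behavior is normally configured declaratively on the activity rather than hand-coded:

```python
import time

def run_with_retry(step, record, max_attempts=3, delay_s=0.0):
    """Run one ETL step on a record, retrying transient failures; return
    ("ok", result) on success or ("quarantined", last_error) for review."""
    last_error = None
    for attempt in range(1, max_attempts + 1):
        try:
            return "ok", step(record)
        except Exception as exc:          # real code would catch narrower types
            last_error = exc
            if attempt < max_attempts:
                time.sleep(delay_s)       # back off before retrying
    return "quarantined", last_error
```

Quarantined records and their errors are logged to a dead-letter store so the pipeline keeps running while bad rows are investigated.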
Posted 1 week ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
What We Offer
At Magna, you can expect an engaging and dynamic environment where you can help to develop industry-leading automotive technologies. We invest in our employees, providing them with the support and resources they need to succeed. As a member of our global team, you can expect exciting, varied responsibilities as well as a wide range of development prospects, because we believe that your career path should be as unique as you are.

Group Summary
Magna is more than one of the world's largest suppliers in the automotive space. We are a mobility technology company built to innovate, with a global, entrepreneurial-minded team. With 65+ years of expertise, our ecosystem of interconnected products combined with our complete vehicle expertise uniquely positions us to advance mobility in an expanded transportation landscape.

Job Responsibilities
Magna New Mobility is seeking a Data Engineer to join our Software Platform team. As a Backend Developer with cloud experience, you will be responsible for designing, developing, and maintaining the server-side components of our applications. You will work closely with cross-functional teams to ensure our systems are scalable, reliable, and secure. Your expertise in cloud platforms will be crucial in optimizing our infrastructure and deploying solutions that leverage cloud-native features.

Your Responsibilities
- Design & Development: Develop robust, scalable, and high-performance backend systems and APIs. Design and implement server-side logic and integrate with front-end components.
- Database Knowledge: Strong experience with relational databases (e.g., MySQL, PostgreSQL) and NoSQL databases, especially MongoDB. Proficient in SQL and handling medium to large-scale datasets using big data platforms like Databricks.
- Familiarity with Change Data Capture (CDC) concepts, and hands-on experience with modern data streaming and integration tools such as Debezium and Apache Kafka.
- Cloud Integration: Leverage cloud platforms (e.g., AWS, Azure, Google Cloud) to deploy, manage, and scale applications. Implement cloud-based solutions for storage, computing, and networking.
- Security: Implement and maintain security best practices, including authentication, authorization, and data protection.
- Performance Optimization: Identify and resolve performance bottlenecks. Monitor application performance and implement improvements as needed.
- Collaboration: Work with product managers, front-end developers, and other stakeholders to understand requirements and deliver solutions. Participate in code reviews and contribute to team knowledge sharing.
- Troubleshooting: Diagnose and resolve issues related to backend systems and cloud infrastructure. Provide support for production environments and ensure high availability.

Who We Are Looking For
- Bachelor's Degree or equivalent experience in Computer Science or a relevant technical field.
- Experience with Microservices: Knowledge and experience with microservices architecture.
- 3+ years of experience in backend development with a strong focus on cloud technologies.
- Technical Skills: Proficiency in backend programming languages such as Go, Python, Node.js, C/C++ or Java. Experience with any cloud platform (AWS, Azure, Google Cloud) and related services (e.g., EC2, Lambda, S3, CloudFormation). Experience in building scalable ETL pipelines on industry-standard ETL orchestration tools (Airflow, Dagster, Luigi, Google Cloud Composer, etc.) with deep expertise in SQL, PySpark, or Scala.
- Database Knowledge: Experience with relational databases (e.g., MySQL, PostgreSQL) and NoSQL databases (e.g., MongoDB). Expertise in SQL and using big data technologies (e.g., Hive, Presto, Spark, Iceberg, Flink, Databricks, etc.) on medium to large-scale data.
- DevOps: Familiarity with CI/CD pipelines, infrastructure as code (IaC), containerization (Docker), and orchestration tools (Kubernetes).

Awareness, Unity, Empowerment
At Magna, we believe that a diverse workforce is critical to our success. That's why we are proud to be an equal opportunity employer. We hire on the basis of experience and qualifications, and in consideration of job requirements, regardless of, in particular, color, ancestry, religion, gender, origin, sexual orientation, age, citizenship, marital status, disability or gender identity.

Magna takes the privacy of your personal information seriously. We discourage you from sending applications via email or traditional mail to comply with GDPR requirements and your local Data Privacy Law.

Worker Type: Regular / Permanent
Group: Magna Corporate
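The Change Data Capture familiarity this posting asks for can be pictured with a toy event applier: Debezium-style CDC feeds emit create/update/delete events per row, and a consumer replays them against a target table to keep it in sync. The event shape below is deliberately simplified (loosely modeled on Debezium's c/u/d operation codes, not its actual envelope):

```python
def apply_cdc_event(table, event):
    """Replay one CDC event onto `table` (a dict keyed by primary key).
    `event` is {"op": "c"|"u"|"d", "key": ..., "after": row-dict or None}."""
    op, key = event["op"], event["key"]
    if op in ("c", "u"):          # create or update: upsert the new row image
        table[key] = event["after"]
    elif op == "d":               # delete: drop the row if present
        table.pop(key, None)
    else:
        raise ValueError(f"unknown op {op!r}")
    return table
```

Replaying an ordered stream of such events (as delivered per-key by a Kafka topic partition) leaves the target table at the source's latest state.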
Posted 1 week ago
6.0 - 10.0 years
4 - 8 Lacs
Hyderabad
Work from Office
We are looking for a skilled Senior Oracle Data Engineer to join our team at Apps Associates (I) Pvt. Ltd, with 6-10 years of experience in the IT Services & Consulting industry.

Roles and Responsibilities
- Design, develop, and implement data engineering solutions using Oracle technologies.
- Collaborate with cross-functional teams to identify and prioritize project requirements.
- Develop and maintain large-scale data pipelines and architectures.
- Ensure data quality, integrity, and security through data validation and testing procedures.
- Optimize data processing workflows for improved performance and efficiency.
- Troubleshoot and resolve complex technical issues related to data engineering projects.

Job Requirements
- Strong knowledge of Oracle data engineering concepts and technologies.
- Experience with data modeling, design, and development.
- Proficiency in programming languages such as Java or Python.
- Excellent problem-solving skills and attention to detail.
- Ability to work collaboratively in a team environment.
- Strong communication and interpersonal skills.
Posted 1 week ago
3.0 - 6.0 years
0 Lacs
Gurgaon
On-site
At Moody's, we unite the brightest minds to turn today's risks into tomorrow's opportunities. We do this by striving to create an inclusive environment where everyone feels welcome to be who they are, with the freedom to exchange ideas, think innovatively, and listen to each other and customers in meaningful ways.

If you are excited about this opportunity but do not meet every single requirement, please apply! You still may be a great fit for this role or other open roles. We are seeking candidates who model our values: invest in every relationship, lead with curiosity, champion diverse perspectives, turn inputs into actions, and uphold trust through integrity.

Responsibilities
- Develop and expand our core data platform in MS Azure and Fabric, building robust data infrastructure and scalable solutions.
- Enhance datasets and transformation toolsets on the MS Azure platform, leveraging distributed processing frameworks to modernize processes and expand the codebase.
- Design and maintain ETL pipelines to ensure data is transformed, cleaned, and standardized for business use.
- Collaborate with cross-functional teams to deliver high-quality data solutions, contributing to both UI and backend development while translating UX/UI designs into functional interfaces.
- Develop scripts for building, deploying, and maintaining data systems, while utilizing tools for data exploration, analysis, and visualization throughout the project lifecycle.
- Utilize SQL and NoSQL databases for effective data management and support Agile practices with tools like Jira and GitHub.
- Contribute to technology standards and best practices in data warehousing and modeling, ensuring alignment with overall data strategy.
- Lead and motivate teams through periods of change, fostering a collaborative and innovative work environment.

Skills and Competencies
- 3-6 years of cloud-based data engineering experience, with expertise in Microsoft Azure and other cloud platforms.
- Proficient in SQL and experienced with NoSQL databases, message queues, and streaming platforms like Kafka.
- Strong knowledge of Python and big data processing using PySpark, along with experience in CI/CD pipelines (Jenkins, GitHub, Terraform).
- Familiar with machine learning libraries such as TensorFlow and Keras, and skilled in data visualization tools like Power BI/Fabric and Matplotlib.
- Expertise in data wrangling, including cleaning, preprocessing, and transformation, with a solid foundation in statistics and probability.
- Excellent communication skills for engaging with technical and non-technical audiences across all organizational levels.
- Experience in UI development, translating UX/UI designs into code, data warehousing concepts, API development and integration, and workflow orchestration tools is desired.

Education
Bachelor's or Master's degree in Computer Science, Engineering, Data Science, Mathematics, Statistics, or a related field. Relevant certifications in data science and machine learning are a plus.

About the team
Our Technology Services Group (TSG) team is responsible for delivering innovative, data-driven tech solutions. We build solutions that power analytics, enable machine learning, and provide critical insights across the organization. By joining our team, you will be part of exciting work in building scalable, next-generation data solutions that directly impact business strategy.

Moody's is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, sexual orientation, gender expression, gender identity or any other characteristic protected by law. Candidates for Moody's Corporation may be asked to disclose securities holdings pursuant to Moody's Policy for Securities Trading and the requirements of the position.
Employment is contingent upon compliance with the Policy, including remediation of positions in those holdings as necessary.
Posted 1 week ago
4.0 years
8 - 10 Lacs
Gurgaon
On-site
Achieving our goals starts with supporting yours. Grow your career, access top-tier health and wellness benefits, build lasting connections with your team and our customers, and travel the world using our extensive route network. Come join us to create what's next. Let's define tomorrow, together.

Description
As an airline, safety is our most important principle. And our Corporate Safety team is responsible for making sure safety is top of mind in every action we take. From conducting flight safety investigations and educating pilots on potential safety threats to implementing medical programs and helping prevent employee injuries, our team is instrumental in running a safe and successful airline for our customers and employees.

Job overview and responsibilities
Corporate safety is integral to ensuring a safe workplace for our employees and travel experience for our customers. This role is responsible for supporting the development and implementation of a cohesive safety data strategy and supporting the Director of Safety Management Systems (SMS) in growing United's Corporate Safety predictive analytics capabilities. This Senior Analyst will serve as a subject matter expert for corporate safety data analytics and predictive insight strategy and execution. This position will support new efforts to deliver insightful data analysis and build new key metrics for use by the entire United Safety organization, with the goal of enabling data-driven decision making and understanding for corporate safety. The Senior Analyst will become the subject matter expert in several corporate-safety-specific data streams and will leverage this expertise to deliver insights which are actionable and allow for a predictive approach to safety risk mitigation.
Develop and implement predictive/prescriptive data analytics workflows for Safety Data Management and streamlining processes Collaborate with Digital Technology and United operational teams to analyze, predict and reduce safety risks and provide measurable solutions Partner with the Digital Technology team to develop streamlined and comprehensive data analytics workstreams Support United’s Safety Management System (SMS) with predictive data analytics by designing and developing statistical models Manage and maintain the project portfolio of the SMS data team Areas of focus will include, but are not limited to: Predictive and prescriptive analytics Train and validate models Creation and maintenance of standardized corporate safety performance metrics Design and implementation of new data pipelines Delivery of prescriptive analysis insights to internal stakeholders Design and maintain new and existing corporate safety data pipelines and analytical workflows Create and manage new methods for data analysis which provide prescriptive and predictive insights on corporate safety data Partner with US- and India-based internal partners to establish new data analysis workflows and provide analytical support to corporate and divisional work groups Collaborate with corporate and divisional safety partners to ensure standardization and consistency between all safety analytics efforts enterprise-wide Provide support and ongoing subject matter expertise regarding a set of high-priority corporate safety datasets and ongoing analytics efforts on those datasets Provide tracking and status update reporting on ongoing assignments, projects, and efforts to US- and India-based leaders This position is offered on local terms and conditions. Expatriate assignments and sponsorship for employment visas, even on a time-limited visa status, will not be awarded. This position is for United Airlines Business Services Pvt. Ltd - a wholly owned subsidiary of United Airlines Inc.
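The posting's "train and validate models" responsibility can be illustrated with a minimal holdout-validation sketch in plain Python. This is only a hedged illustration of the general workflow; the actual United tooling and datasets are not described in the posting, and all data and names below are synthetic.

```python
# Minimal train/validate sketch: fit a one-variable least-squares model on a
# training split, then measure error on a held-out validation split.
# All data below is synthetic; real safety datasets would come from a pipeline.

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def mean_abs_error(model, xs, ys):
    """Mean absolute error of the fitted line on a dataset."""
    a, b = model
    return sum(abs((a * x + b) - y) for x, y in zip(xs, ys)) / len(xs)

# Hypothetical "safety events" vs. an exposure metric.
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
y = [2.1, 4.0, 6.2, 7.9, 10.1, 12.0, 14.1, 15.8]

train_x, train_y = x[:6], y[:6]   # training split
valid_x, valid_y = x[6:], y[6:]   # held-out validation split

model = fit_line(train_x, train_y)
mae = mean_abs_error(model, valid_x, valid_y)
print(round(mae, 2))
```

In practice this shape (fit on one split, score on another) is what "train and validate" means regardless of whether the model is a line or a gradient-boosted ensemble.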
Qualifications What’s needed to succeed (Minimum Qualifications): Bachelor's degree in computer science, data science, information systems, engineering, or another quantitative field (e.g., mathematics, statistics, economics) 4+ years of experience in data analytics, predictive modeling, or statistics Expert-level SQL skills Experience with Microsoft SQL Server Management Studio and hands-on experience working with massive data sets Proficiency writing complex code using both traditional and modern technologies/languages (e.g., Python, HTML, JavaScript, Power Automate, Spark, Node, etc.) for queries, procedures, and analytic processing to create usable data insight Ability to study/understand business needs, then design a data/technology solution that connects business processes with quantifiable outcomes Strong project management and communication skills 3-4 years working with complex data (data analytics, information science, data visualization or other relevant quantitative field) Must be legally authorized to work in India for any employer without sponsorship Must be fluent in English (written and spoken) Successful completion of interview required to meet job qualification Reliable, punctual attendance is an essential function of the position What will help you propel from the pack (Preferred Qualifications): Master's degree ML / AI experience Experience with PySpark, Apache, or Hadoop to deal with massive data sets
Posted 1 week ago
0 years
0 Lacs
Gurgaon
On-site
Our Purpose Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we’re helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential. Title and Summary Associate Managing Consultant, Strategy & Transformation Overview: Associate Managing Consultant – Performance Analytics Advisors & Consulting Services within Mastercard is responsible for acquiring, engaging, and retaining customers by managing fraud and risk, enhancing cybersecurity, and improving the digital payments experience. We provide value-added services and leverage expertise, data-driven insights, and execution. Our Advisors & Consulting Services team combines traditional management consulting with Mastercard’s rich data assets, proprietary platforms, and technologies to provide clients with powerful strategic insights and recommendations. Our teams work with a diverse global customer base across industries, from banking and payments to retail and restaurants. The Advisors & Consulting Services group has five specializations: Strategy & Transformation, Performance Analytics, Business Experimentation, Marketing, and Program Management. Our Performance Analytics consultants translate data into insights by leveraging Mastercard and customer data to design, implement, and scale analytical solutions for customers. They use qualitative and quantitative analytical techniques and enterprise applications to synthesize analyses into clear recommendations and impactful narratives. Positions for different specializations and levels are available in separate job postings.
Please review our consulting specializations to learn more about all opportunities and apply for the position that is best suited to your background and experience: https://careers.mastercard.com/us/en/consulting-specializations-at-mastercard Roles and Responsibilities Client Impact Manage deliverable development and workstreams on projects across a range of industries and problem statements Contribute to and/or develop analytics strategies and programs for large, regional, and global clients by leveraging data and technology solutions to unlock client value Manage working relationships with client managers, and act as a trusted and reliable partner Create predictive models using segmentation and regression techniques to drive profits Review analytics end-products to ensure accuracy, quality, and timeliness. Proactively seek new knowledge and structure project work to facilitate the capture of Intellectual Capital with minimal oversight Team Collaboration & Culture Develop sound business recommendations and deliver effective client presentations Plan, organize, and structure own work and that of junior project delivery consultants to identify effective analysis structures to address client problems and synthesize analyses into relevant findings Lead team and external meetings, and lead or co-lead project management Contribute to the firm's intellectual capital and solution development Grow from coaching to enable ownership of day-to-day project management across client projects, and mentor junior consultants Develop effective working relationships with local and global teams including business partners Qualifications Basic qualifications Undergraduate degree with data and analytics experience in business intelligence and/or descriptive, predictive, or prescriptive analytics Experience managing clients or internal stakeholders Ability to analyze large datasets and synthesize key findings to provide recommendations via descriptive analytics and business intelligence
Knowledge of metrics, measurements, and benchmarking applied to complex and demanding solutions across multiple industry verticals Data and analytics experience such as working with data analytics software (e.g., Python, R, SQL, SAS) and building, managing, and maintaining database structures Advanced Word, Excel, and PowerPoint skills Ability to perform multiple tasks with multiple clients in a fast-paced, deadline-driven environment Ability to communicate effectively in English and the local office language (if applicable) Eligibility to work in the country where you are applying, as well as apply for travel visas as required by travel needs Preferred qualifications Additional data and analytics experience working with the Hadoop framework and coding using Impala, Hive, or PySpark, or working with data visualization tools (e.g., Tableau, Power BI) Experience managing tasks or workstreams in a collaborative team environment Experience coaching junior delivery consultants Relevant industry expertise MBA or master’s degree with relevant specialization (not required) Corporate Security Responsibility All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization and, therefore, it is expected that every person working for, or on behalf of, Mastercard is responsible for information security and must: Abide by Mastercard’s security policies and practices; Ensure the confidentiality and integrity of the information being accessed; Report any suspected information security violation or breach; and Complete all periodic mandatory security trainings in accordance with Mastercard’s guidelines.
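The "segmentation techniques" mentioned in the responsibilities above can be sketched minimally with quantile-based spend segmentation. This is a hedged, illustrative stand-in only: the customer data, tier names, and the choice of quantile bucketing are all hypothetical, not Mastercard's actual methodology.

```python
# Minimal segmentation sketch: bucket customers into spend tiers by rank.
# Data and tier labels are illustrative.

def quantile_segments(spend_by_customer, labels=("low", "mid", "high")):
    """Assign each customer a tier based on which third of the sorted
    spend distribution they fall into."""
    ranked = sorted(spend_by_customer, key=spend_by_customer.get)
    n = len(ranked)
    segments = {}
    for i, cust in enumerate(ranked):
        # Map rank position to one of len(labels) equal-sized buckets.
        tier = labels[min(i * len(labels) // n, len(labels) - 1)]
        segments[cust] = tier
    return segments

spend = {"c1": 120.0, "c2": 940.0, "c3": 310.0,
         "c4": 55.0, "c5": 2100.0, "c6": 480.0}
segments = quantile_segments(spend)
print(segments)
```

A regression model would then typically be fit per segment (or with the segment as a feature) to predict spend or response, which is the pairing the posting's "segmentation and regression techniques" phrase suggests.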
Posted 1 week ago
6.0 years
7 - 8 Lacs
Hyderābād
On-site
Full-time Employee Status: Regular Role Type: Hybrid Department: Analytics Schedule: Full Time Company Description Experian is a global data and technology company, powering opportunities for people and businesses around the world. We help to redefine lending practices, uncover and prevent fraud, simplify healthcare, create marketing solutions, and gain deeper insights into the automotive market, all using our unique combination of data, analytics and software. We also assist millions of people to realize their financial goals and help them save time and money. We operate across a range of markets, from financial services to healthcare, automotive, agribusiness, insurance, and many more industry segments. We invest in people and new advanced technologies to unlock the power of data. As a FTSE 100 Index company listed on the London Stock Exchange (EXPN), we have a team of 22,500 people across 32 countries. Our corporate headquarters are in Dublin, Ireland. Learn more at experianplc.com. Job Description The Senior Data Engineer is responsible for designing, developing, and supporting ETL data pipeline solutions, primarily in an AWS environment. Design, develop, and maintain scalable ETL processes to deliver meaningful insights from large and complicated data sets. Work as part of a team to build out and support the data warehouse, implementing solutions using PySpark to process structured and unstructured data. Play a key role in building out a semantic layer through development of ETLs and virtualized views. Collaborate with Engineering teams to discover and leverage new data being introduced into the environment. Support existing ETL processes written in SQL or leveraging third-party APIs with Python; troubleshoot and resolve production issues. Strong SQL and data skills to understand and troubleshoot existing complex SQL.
Hands-on experience with Apache Airflow or equivalent tools (AWS MWAA) for orchestration of data pipelines Create and maintain report specifications and process documentation as part of the required data deliverables. Serve as liaison with business and technical teams to achieve project objectives, delivering cross-functional reporting solutions. Troubleshoot and resolve data, system, and performance issues Communicate with business partners, other technical teams, and management to collect requirements, articulate data deliverables, and provide technical designs. Qualifications Graduation from BE/BTech 6 to 9 years of experience in Data Engineering development 5 years of experience in Python scripting 8 years of experience in SQL, 5+ years in data warehousing, 5 years in Agile, and 3 years with cloud 3 years of experience with the AWS ecosystem (Redshift, EMR, S3, MWAA) 5 years of experience in Agile development methodology You will work with the team to create solutions Proficiency in CI/CD tools (Jenkins, GitLab, etc.) Additional Information Our uniqueness is that we celebrate yours. Experian's culture and people are important differentiators. We take our people agenda very seriously and focus on what matters; DEI, work/life balance, development, authenticity, collaboration, wellness, reward & recognition, volunteering... the list goes on. Experian's people-first approach is award-winning; World's Best Workplaces™ 2024 (Fortune Top 25), Great Place To Work™ in 24 countries, and Glassdoor Best Places to Work 2024 to name a few. Check out Experian Life on social or our Careers Site to understand why. Experian is proud to be an Equal Opportunity and Affirmative Action employer. Innovation is an important part of Experian's DNA and practices, and our diverse workforce drives our success.
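The extract-with-SQL, transform-in-Python, load-back shape of the ETL work described in this posting can be sketched with the standard library's sqlite3. This is a toy stand-in under stated assumptions: a production pipeline here would target Redshift/EMR and be orchestrated through Airflow (MWAA), and every table and column name below is illustrative.

```python
import sqlite3

# Toy ETL sketch: extract rows with SQL, transform in Python, load an
# aggregate into a target table. Names are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_events (user_id TEXT, amount REAL)")
conn.executemany("INSERT INTO raw_events VALUES (?, ?)",
                 [("u1", 10.0), ("u1", 5.5), ("u2", 3.0)])

# Extract: pull the raw rows out with SQL.
rows = conn.execute("SELECT user_id, amount FROM raw_events").fetchall()

# Transform: aggregate spend per user in Python.
totals = {}
for user_id, amount in rows:
    totals[user_id] = totals.get(user_id, 0.0) + amount

# Load: write the aggregate into the target table.
conn.execute("CREATE TABLE user_totals (user_id TEXT, total REAL)")
conn.executemany("INSERT INTO user_totals VALUES (?, ?)", totals.items())

print(dict(conn.execute("SELECT * FROM user_totals ORDER BY user_id")))
```

In an Airflow deployment each of the three stages would typically be its own task, so a failed load can be retried without re-running the extract.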
Everyone can succeed at Experian and bring their whole self to work, irrespective of their gender, ethnicity, religion, colour, sexuality, physical ability or age. If you have a disability or special need that requires accommodation, please let us know at the earliest opportunity. #LI-Onsite Benefits Experian cares for employees' work-life balance, health, safety and wellbeing. In support of this endeavor, we offer family well-being benefits, enhanced medical benefits, and paid time off. Experian Careers - Creating a better tomorrow together
Posted 1 week ago
9.0 years
15 Lacs
India
On-site
Experience- 9+ years Location: Pune, Hyderabad (Preferred) JD- Experience performing design, development, and deployment using Azure services (Data Factory, Azure Data Lake Storage, Databricks, PySpark, SQL) Develop and maintain scalable data pipelines and build out new data source integrations to support continuing increases in data volume and complexity. Experience creating Technical Specification Design and Application Interface Design documents. File processing – XML, CSV, Excel, ORC, Parquet formats Develop batch processing, streaming, and integration solutions and process structured and unstructured data Good to have experience with ETL development both on-premises and in the cloud using SSIS, Data Factory, and related Microsoft and other ETL technologies (Informatica preferred) Demonstrated in-depth skills with Azure Data Factory, Azure Databricks, PySpark, and ADLS (must have), with the ability to configure and administer all aspects of Azure SQL DB. Collaborate and engage with BI & analytics and business teams Deep understanding of the operational dependencies of applications, networks, systems, security, and policy, both on-premises and in the cloud (VMs, networking, VPN (Express Route), Active Directory, storage (Blob, etc.)). Job Types: Full-time, Permanent Pay: From ₹1,500,000.00 per year Schedule: Fixed shift Application Question(s): How many years of total experience do you currently have? How many years of experience do you have with Azure data services? How many years of experience do you have with Azure Databricks? How many years of experience do you have with PySpark? What is your current CTC? What is your expected CTC? What is your notice period/ LWD? Are you comfortable attending L2 interview face to face in Hyderabad or Pune office? What is your current and preferred location?
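The CSV branch of the file-format processing listed above (XML, CSV, Excel, ORC, Parquet) follows a read → transform → write shape; in Databricks this maps to `spark.read.csv(...)` followed by DataFrame transforms and `df.write.parquet(...)`. The sketch below uses only the standard library as an illustrative stand-in, with hypothetical column names and a simple data-quality rule.

```python
import csv
import io

# Batch-processing sketch for the CSV case: read, filter/derive, write.
# In-memory buffers stand in for source and sink files; field names are
# illustrative, not from any real pipeline.
source = io.StringIO("order_id,qty,unit_price\n1,2,9.99\n2,0,5.00\n3,1,20.00\n")
sink = io.StringIO()

reader = csv.DictReader(source)
writer = csv.DictWriter(sink, fieldnames=["order_id", "total"])
writer.writeheader()
for row in reader:
    qty = int(row["qty"])
    if qty == 0:  # drop zero-quantity orders (simple data-quality rule)
        continue
    writer.writerow({"order_id": row["order_id"],
                     "total": qty * float(row["unit_price"])})

print(sink.getvalue().splitlines())
```

Columnar formats like ORC and Parquet swap the reader/writer for a columnar engine, but the filter-and-derive middle stays the same shape.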
Posted 1 week ago
4.0 - 8.0 years
12 - 16 Lacs
Pune
Work from Office
Job Description We are seeking a highly skilled and experienced Data Engineering professional for our data engineering team. The ideal candidate will have extensive hands-on experience with the Microsoft Azure technology stack, including Azure Data Factory, Azure Databricks, Azure SQL Database, Azure Synapse Analytics, and other related services. This role requires a strong focus on data management, data engineering, and governance, ensuring the delivery of high-quality data solutions to support business objectives. Key Responsibilities: Technical Oversight & Delivery: Provide technical guidance and support to team members, promoting best practices and innovative solutions Oversee the planning, execution, and delivery of data engineering projects, ensuring alignment with business goals and timelines. Data Engineering: Design, develop, and maintain scalable and robust data pipelines using Azure Data Factory, Azure Databricks, and other Azure services. Implement ETL/ELT processes to ingest, transform, and load data from various sources into data lakes and data warehouses (sources specifically include Excel, SAP HANA, APIs, and SQL Server). Optimize data workflows for performance, scalability, and reliability. Data Management: Ensure data quality, integrity, and consistency across all data platforms. Manage data storage, retrieval, and archiving solutions, leveraging Azure Blob Storage, Azure Data Lake, and Azure SQL Database. Develop and enforce data management policies and procedures. Data Governance: Establish and maintain data governance frameworks, including data cataloging, lineage, and metadata management. Implement data security and privacy measures, ensuring compliance with relevant regulations and industry standards. Monitor and audit data usage, access controls, and data protection practices.
Collaboration & Communication: Collaborate with cross-functional teams, including data scientists, analysts, and business stakeholders, to understand data requirements and deliver solutions. Communicate complex technical concepts to non-technical stakeholders, ensuring transparency and alignment. Provide regular updates and reports on data engineering activities, progress, and challenges. Qualifications: Bachelor’s or Master’s degree in Computer Science, Information Technology, Engineering, or a related field. Strong hands-on experience with the Microsoft Azure technology stack, including but not limited to: Azure Data Factory Azure Databricks Azure SQL Database Azure Synapse Analytics Azure Data Lake Storage Proficiency in programming languages such as SQL, Python, and Scala. Experience with data modeling, ETL/ELT processes, Medallion Architecture, and data warehousing solutions. Solid understanding of data governance principles, data quality management, and data security best practices. Excellent problem-solving skills and the ability to work in a fast-paced, dynamic environment. Strong communication, leadership, and project management skills. Preferred Qualifications: Azure certifications such as Microsoft Certified: Azure Data Engineer Associate or Microsoft Certified: Azure Solutions Architect Expert. Experience with other data platforms and tools such as Power BI, Azure Machine Learning, and Azure DevOps. Familiarity with big data technologies and frameworks like Hadoop and Spark.
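The Medallion Architecture named in the qualifications layers data as bronze (raw), silver (validated/typed), and gold (business aggregates). The sketch below is a hedged, toy illustration of that layering only: plain Python dicts stand in for the Delta tables Databricks would actually use, and the field names are hypothetical.

```python
# Medallion-style layering sketch: bronze (raw), silver (cleaned),
# gold (business aggregate). Dicts stand in for Delta tables.
bronze = [
    {"sku": "A", "qty": "2", "price": "10.0"},
    {"sku": "B", "qty": "", "price": "4.0"},  # bad record: missing qty
    {"sku": "A", "qty": "1", "price": "10.0"},
]

# Silver: validate and type-cast, dropping records that fail checks.
silver = [
    {"sku": r["sku"], "qty": int(r["qty"]), "price": float(r["price"])}
    for r in bronze if r["qty"].isdigit()
]

# Gold: revenue per SKU, ready for reporting.
gold = {}
for r in silver:
    gold[r["sku"]] = gold.get(r["sku"], 0.0) + r["qty"] * r["price"]

print(gold)
```

The point of the layering is that each stage is independently queryable and re-derivable: if a silver validation rule changes, gold can be rebuilt from bronze without re-ingesting sources.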
Posted 1 week ago
5.0 - 8.0 years
12 - 18 Lacs
Pune
Work from Office
Job Description We are seeking a highly skilled and experienced Data Engineering professional for our data engineering team. The ideal candidate will have extensive hands-on experience with the Microsoft Azure technology stack, including Azure Data Factory, Azure Databricks, Azure SQL Database, Azure Synapse Analytics, and other related services. This role requires a strong focus on data management, data engineering, and governance, ensuring the delivery of high-quality data solutions to support business objectives. Key Responsibilities: Technical Oversight & Delivery: Provide technical guidance and support to team members, promoting best practices and innovative solutions Oversee the planning, execution, and delivery of data engineering projects, ensuring alignment with business goals and timelines. Data Engineering: Design, develop, and maintain scalable and robust data pipelines using Azure Data Factory, Azure Databricks, and other Azure services. Implement ETL/ELT processes to ingest, transform, and load data from various sources into data lakes and data warehouses (sources specifically include Excel, SAP HANA, APIs, and SQL Server). Optimize data workflows for performance, scalability, and reliability. Data Management: Ensure data quality, integrity, and consistency across all data platforms. Manage data storage, retrieval, and archiving solutions, leveraging Azure Blob Storage, Azure Data Lake, and Azure SQL Database. Develop and enforce data management policies and procedures. Data Governance: Establish and maintain data governance frameworks, including data cataloging, lineage, and metadata management. Implement data security and privacy measures, ensuring compliance with relevant regulations and industry standards. Monitor and audit data usage, access controls, and data protection practices.
Collaboration & Communication: Collaborate with cross-functional teams, including data scientists, analysts, and business stakeholders, to understand data requirements and deliver solutions. Communicate complex technical concepts to non-technical stakeholders, ensuring transparency and alignment. Provide regular updates and reports on data engineering activities, progress, and challenges. Qualifications: Bachelor’s or Master’s degree in Computer Science, Information Technology, Engineering, or a related field. More than 5 years of strong hands-on experience with the Microsoft Azure technology stack, including but not limited to: Azure Data Factory Azure Databricks Azure SQL Database Azure Synapse Analytics Azure Data Lake Storage Proficiency in programming languages such as SQL, Python, and Scala. Experience with data modeling, ETL/ELT processes, Medallion Architecture, and data warehousing solutions. Solid understanding of data governance principles, data quality management, and data security best practices. Excellent problem-solving skills and the ability to work in a fast-paced, dynamic environment. Strong communication, leadership, and project management skills. Preferred Qualifications: Azure certifications such as Microsoft Certified: Azure Data Engineer Associate or Microsoft Certified: Azure Solutions Architect Expert. Experience with other data platforms and tools such as Power BI, Azure Machine Learning, and Azure DevOps. Familiarity with big data technologies and frameworks like Hadoop and Spark.
Posted 1 week ago
170.0 years
0 Lacs
Hyderābād
On-site
Country/Region: IN Requisition ID: 27524 Work Model: Position Type: Salary Range: Location: INDIA - HYDERABAD - BIRLASOFT OFFICE Title: Azure Databricks Developer Description: Area(s) of responsibility About Us: Birlasoft, a global leader at the forefront of Cloud, AI, and Digital technologies, seamlessly blends domain expertise with enterprise solutions. The company’s consultative and design-thinking approach empowers societies worldwide, enhancing the efficiency and productivity of businesses. As part of the multibillion-dollar diversified CKA Birla Group, Birlasoft with its 12,000+ professionals, is committed to continuing the Group’s 170-year heritage of building sustainable communities. Azure Data Engineer with Databricks (7+ Years) Experience: 7+ Years Job Description: Experience performing design, development, and deployment using Azure services (Data Factory, Azure Data Lake Storage, Databricks, PySpark, SQL) Develop and maintain scalable data pipelines and build out new data source integrations to support continuing increases in data volume and complexity. Experience creating Technical Specification Design and Application Interface Design documents. File processing – XML, CSV, Excel, ORC, Parquet formats Develop batch processing, streaming, and integration solutions and process structured and unstructured data Good to have experience with ETL development both on-premises and in the cloud using SSIS, Data Factory, and related Microsoft and other ETL technologies (Informatica preferred) Demonstrated in-depth skills with Azure Data Factory, Azure Databricks, PySpark, and ADLS (must have), with the ability to configure and administer all aspects of Azure SQL DB. Collaborate and engage with BI & analytics and business teams Deep understanding of the operational dependencies of applications, networks, systems, security, and policy, both on-premises and in the cloud (VMs, networking, VPN (Express Route), Active Directory, storage (Blob, etc.)).
Posted 1 week ago
5.0 years
6 - 9 Lacs
Hyderābād
Remote
Overview: Primary focus would be to perform development work within the Azure Data Lake environment and other related ETL technologies, with the responsibility of ensuring on-time and on-budget delivery; satisfying project requirements, while adhering to enterprise architecture standards. This role will also have L3 responsibilities for ETL processes Responsibilities: Delivery of key Azure Data Lake projects within time and budget Contribute to solution design and build to ensure scalability, performance and reuse of data and other components Ensure on-time and on-budget delivery which satisfies project requirements, while adhering to enterprise architecture standards. Possess strong problem-solving abilities with a focus on managing to business outcomes through collaboration with multiple internal and external parties Enthusiastic, willing, able to learn and continuously develop skills and techniques – enjoys change and seeks continuous improvement A clear communicator both written and verbal with good presentational skills, fluent and proficient in the English language Customer focused and a team player Qualifications: Bachelor’s degree in computer science, MIS, Business Management, or related field 5+ years’ experience in Information Technology 4+ years’ experience in Azure Data Lake Technical Skills Proven development experience in Data, BI or Analytics projects Solutions delivery experience - knowledge of system development lifecycle, integration, and sustainability Strong knowledge of PySpark and SQL Good knowledge of Azure Data Factory or Databricks Knowledge of Presto / Denodo is desirable Knowledge of FMCG business processes is desirable Non-Technical Skills Excellent remote collaboration skills Experience working in a matrix organization with diverse priorities Exceptional written and verbal communication skills along with collaboration and listening skills Ability to work with agile delivery methodologies Ability to ideate requirements & design
iteratively with business partners without formal requirements documentation
Posted 1 week ago
0 years
0 Lacs
Hyderābād
On-site
Our vision is to transform how the world uses information to enrich life for all. Micron Technology is a world leader in innovating memory and storage solutions that accelerate the transformation of information into intelligence, inspiring the world to learn, communicate and advance faster than ever. About profile – Smart Manufacturing and AI (Data Science Engineer) Micron Technology’s vision is to transform how the world uses information to enrich life, and our commitment to people, innovation, tenacity, collaboration, and customer focus allows us to fulfill our mission to be a global leader in memory and storage solutions. This means conducting business with integrity, accountability, and professionalism while supporting our global community. Describe the function of the role and how it fits into your department? As a Data Science Engineer at Micron Technology Inc., you will be a key member of a multi-functional team responsible for developing and growing Micron’s methods and systems for applied data analysis, modeling and reporting. You will be collaborating with other data scientists, engineers, technicians and data mining teams to design and implement systems to transform and process data extracted from Micron’s business systems, applying advanced statistical and mathematical methods to analyze the data, creating diagnostic and predictive models, and creating dynamic presentation layers for use by high-level engineers and managers throughout the company. You will be creating new solutions, as well as supporting, configuring, and improving existing solutions. Why would a candidate love to work for your group and team? We are a Smart Manufacturing and AI organization with a goal to spearhead Industry 4.0 transformation and enable accelerated intelligence and digital operations in the company.
Our teams deal with projects to help solve complex real-time business problems that would significantly help improve yield, cycle time, quality and reduce cost of our products. This role also gives a great opportunity to closely work with data scientists, I4.0 analysts and engineers and with the latest big data and cloud-based platforms/skillsets. We highly welcome new ideas and are large proponents of innovation. What are your expectations for the position? We are seeking Data Science Engineers who are highly passionate about data and associated analysis techniques, can quickly adapt to learning new skills and can design/implement state-of-the-art Data Science and ML pipelines on-prem and on cloud. You will interact with experienced Data Scientists, Data Engineers, Business Area Engineers, and UX teams to identify questions and issues for Data Science, AI and Advanced analysis projects and improvement of existing tools. In this position, you will help develop software programs, algorithms and/or automated processes to transform and process data from multiple sources, to apply statistical and ML techniques to analyze data, to discover underlying patterns or improve prediction capabilities, and to deploy advanced visualizations on modern UI platforms. There will be significant opportunities to perform exploratory and new solution development activities. Roles & responsibilities can include but are not limited to: Broad knowledge and experience in: Strong desire to grow a career as a Data Scientist in highly automated industrial manufacturing, doing analysis and machine learning on terabytes and petabytes of diverse datasets. Ability to extract data from different databases via SQL and other query languages and apply data cleansing, outlier identification, and missing data techniques. Ability to apply the latest mathematical and statistical techniques to analyze data and uncover patterns. Interest in building web applications as part of the job scope.
Knowledge in Cloud-based Analytics and Machine Learning Modeling Knowledge in building APIs for application integration. Knowledge in the areas: statistical modeling, feature extraction and analysis, feature engineering, supervised/unsupervised/semi-supervised learning. Data Analysis and Validation skills Strong software development skills. Above-average skills in: Programming Fluency in Python Knowledge in statistics, Machine Learning and other advanced analytical methods Knowledge in JavaScript, AngularJS 2.0, Tableau will be an added advantage. Knowledge of OOP is an added advantage. Understanding of PySpark and/or libraries for distributed and parallel processing is an added advantage. Knowledge in TensorFlow and/or other statistical software, including scripting capability for automating analyses Knowledge with time series data, images, semi-supervised learning, and data with frequently changing distributions is a plus Understanding of Manufacturing Execution Systems (MES) is a plus Demonstrated ability to: Work in a dynamic, fast-paced work environment Self-motivated with the ability to work under minimal direction Adapt to new technologies and learn quickly A passion for data and information with strong analytical, problem solving, and organizational skills Work in multi-functional groups, with diverse interests and requirements, to a common objective Communicate very well with distributed teams (written, verbal and presentation) Education: Bachelor’s or Master’s Degree in Computer Science, Mathematics, Data Science, or Physics. CGPA requirement: 7.0 and above About Micron Technology, Inc. We are an industry leader in innovative memory and storage solutions transforming how the world uses information to enrich life for all.
With a relentless focus on our customers, technology leadership, and manufacturing and operational excellence, Micron delivers a rich portfolio of high-performance DRAM, NAND, and NOR memory and storage products through our Micron® and Crucial® brands. Every day, the innovations that our people create fuel the data economy, enabling advances in artificial intelligence and 5G applications that unleash opportunities — from the data center to the intelligent edge and across the client and mobile user experience. To learn more, please visit micron.com/careers All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran or disability status. To request assistance with the application process and/or for reasonable accommodations, please contact hrsupport_india@micron.com Micron prohibits the use of child labor and complies with all applicable laws, rules, regulations, and other international and industry labor standards. Micron does not charge candidates any recruitment fees or unlawfully collect any other payment from candidates as consideration for their employment with Micron. AI alert: Candidates are encouraged to use AI tools to enhance their resume and/or application materials. However, all information provided must be accurate and reflect the candidate's true skills and experiences. Misuse of AI to fabricate or misrepresent qualifications will result in immediate disqualification. Fraud alert: Micron advises job seekers to be cautious of unsolicited job offers and to verify the authenticity of any communication claiming to be from Micron by checking the official Micron careers website.
Posted 1 week ago
5.0 years
0 Lacs
Hyderābād
On-site
Job description Some careers shine brighter than others. If you’re looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further. HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions. We are currently seeking an experienced professional to join our team in the role of Consultant Specialist. In this role, you will be: Provide support across the end-to-end delivery and run lifecycle, utilising your skills and expertise to carry out software development, testing and operational support activities with the ability to move between these according to demand End to end accountability for a module or part of a product or service, identifying and developing the most appropriate Technology solutions to meet customer needs as part of the Customer Journey Liaise with other engineers, architects, and business stakeholders to understand and drive the product or service’s direction Establish a digital environment and automate processes to minimize variation and ensure predictable high quality code and data Create technical test plans and records, including unit and integration tests, within automated test environments to ensure code quality Provide support to DevOps teams working at all stages of a product or service release/change with a strong customer focus and end to end journeys, ensuring they have an excellent domain knowledge. Working with Ops, Dev and Test Engineers to ensure operational issues (performance, operator intervention, alerting, design defect related issues, etc.) 
are identified and addressed at all stages of a product or service release/change. Provide support in the identification and resolution of all incidents associated with the IT service, proactively handling production support activities and areas of improvement. Ensure service resilience, service sustainability and recovery time objectives are met for all the software solutions delivered. Responsible for support and maintenance of the continuous integration/continuous delivery pipeline within a DevOps product/service team, driving a culture of continuous improvement.
Requirements
To be successful in this role, you should meet the following requirements:
- 5+ years of experience in handling distributed/big data projects, with proficiency in PySpark, Linux scripting, SQL and big data tools
- Technology stack: PySpark, ETL, Unix shell scripting, Python, Spark, SQL; big data tools: Hadoop, Hive; DevOps tools
- Strong experience interpreting business requirements from a technical perspective; design, develop and implement IT solutions that fulfil business users' requirements and conform to a high quality standard
- Experience with cloud platforms
- Sound problem-solving skills and attention to detail
- Strong communication, presentation and team collaboration skills
- Knowledge of automation and DevOps practices and tools like Docker, Kubernetes, Jenkins, Ansible, G3, Nexus, Git, and test automation
- Familiarity with agile development methodologies using Jira
You’ll achieve more when you join HSBC. www.hsbc.com/careers
HSBC is committed to building a culture where all employees are valued and respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website.
Issued by – HSBC Software Development India
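The PySpark/SQL/Hive stack named above centres on set-based filter-and-aggregate transformations. As a minimal, hedged illustration of the kind of cleanse-and-summarise step such a batch ETL job performs (table and column names are hypothetical, and the stdlib sqlite3 stands in here for Hive or Spark SQL):

```python
import sqlite3

# Hypothetical transactions table; in the stack described above this
# query would run against Hive or be expressed as PySpark DataFrame ops.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE txns (account_id TEXT, amount REAL, status TEXT)")
conn.executemany(
    "INSERT INTO txns VALUES (?, ?, ?)",
    [("A1", 100.0, "OK"), ("A1", 250.0, "OK"), ("A2", 75.0, "FAILED")],
)

# Filter out failed records, then aggregate per account --
# the classic cleanse-and-summarise step of a batch ETL job.
rows = conn.execute(
    """
    SELECT account_id, SUM(amount) AS total, COUNT(*) AS n
    FROM txns
    WHERE status = 'OK'
    GROUP BY account_id
    ORDER BY account_id
    """
).fetchall()
print(rows)  # [('A1', 350.0, 2)]
```

The same WHERE/GROUP BY shape translates directly to a PySpark `filter` followed by `groupBy(...).agg(...)`.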
Posted 1 week ago
7.0 years
20 Lacs
India
On-site
Job Description
Experience: 7 years
Location: Hyderabad
We are seeking a skilled and dynamic Azure Data Engineer to join our growing data engineering team. The ideal candidate will have a strong background in building and maintaining data pipelines and working with large datasets on the Azure cloud platform. The Azure Data Engineer will be responsible for developing and implementing efficient ETL processes, working with data warehouses, and leveraging cloud technologies such as Azure Data Factory (ADF), Azure Databricks, PySpark, and SQL to process and transform data for analytical purposes.
Key Responsibilities:
- Data Pipeline Development: Design, develop, and implement scalable, reliable, and high-performance data pipelines using Azure Data Factory (ADF), Azure Databricks, and PySpark.
- Data Processing: Develop complex data transformations, aggregations, and cleansing processes using PySpark and Databricks for big data workloads.
- Data Integration: Integrate and process data from various sources such as databases, APIs, cloud storage (e.g., Blob Storage, Data Lake), and third-party services into Azure Data Services.
- Optimization: Optimize data workflows and ETL processes to ensure efficient data loading, transformation, and retrieval while ensuring data integrity and high performance.
- SQL Development: Write complex SQL queries for data extraction, aggregation, and transformation. Maintain and optimize relational databases and data warehouses.
- Collaboration: Work closely with data scientists, analysts, and other engineering teams to understand data requirements and design solutions that meet business and analytical needs.
- Automation & Monitoring: Implement automation for data pipeline deployment and ensure monitoring, logging, and alerting mechanisms are in place for pipeline health.
- Cloud Infrastructure Management: Work with cloud technologies (e.g., Azure Data Lake, Blob Storage) to store, manage, and process large datasets.
- Documentation & Best Practices: Maintain thorough documentation of data pipelines, workflows, and best practices for data engineering solutions.
Job Type: Full-time
Pay: Up to ₹2,000,000.00 per year
Work Location: In person
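The cleansing responsibility above reduces to a small set of recurring operations: dropping duplicates, discarding rows with missing keys, and normalising nulls before load. A plain-Python sketch of such a step (field names are hypothetical; a production pipeline would express this as PySpark DataFrame operations such as `dropDuplicates` and `fillna`):

```python
def cleanse(records):
    """Drop exact duplicates and rows missing a primary key, and default
    null amounts to 0.0 -- a typical pre-load cleansing step."""
    seen = set()
    out = []
    for rec in records:
        key = rec.get("id")
        if key is None or key in seen:
            continue  # skip rows with no key, and repeat keys
        seen.add(key)
        out.append({**rec, "amount": rec.get("amount") or 0.0})
    return out

raw = [
    {"id": "r1", "amount": 10.5},
    {"id": "r1", "amount": 10.5},   # duplicate -> dropped
    {"id": None, "amount": 3.0},    # missing key -> dropped
    {"id": "r2", "amount": None},   # null amount -> defaulted to 0.0
]
print(cleanse(raw))  # [{'id': 'r1', 'amount': 10.5}, {'id': 'r2', 'amount': 0.0}]
```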
Posted 1 week ago
4.0 - 7.0 years
18 - 20 Lacs
Pune
Hybrid
Job Title: GCP Data Engineer
Location: Pune, India
Experience: 4 to 7 years
Job Type: Full-Time
Job Summary: We are looking for a highly skilled GCP Data Engineer with 4 to 7 years of experience to join our data engineering team in Pune. The ideal candidate should have strong experience working with Google Cloud Platform (GCP), including Dataproc and Cloud Composer (Apache Airflow), and must be proficient in Python, SQL, and Apache Spark. The role involves designing, building, and optimizing data pipelines and workflows to support enterprise-grade analytics and data science initiatives.
Key Responsibilities:
- Design and implement scalable and efficient data pipelines on GCP, leveraging Dataproc, BigQuery, Cloud Storage, and Pub/Sub.
- Develop and manage ETL/ELT workflows using Apache Spark, SQL, and Python.
- Orchestrate and automate data workflows using Cloud Composer (Apache Airflow).
- Build batch and streaming data processing jobs that integrate data from various structured and unstructured sources.
- Optimize pipeline performance and ensure cost-effective data processing.
- Collaborate with data analysts, scientists, and business teams to understand data requirements and deliver high-quality solutions.
- Implement and monitor data quality checks, validation, and transformation logic.
Required Skills:
- Strong hands-on experience with Google Cloud Platform (GCP)
- Proficiency with Dataproc for big data processing and Apache Spark
- Expertise in Python and SQL for data manipulation and scripting
- Experience with Cloud Composer / Apache Airflow for workflow orchestration
- Knowledge of data modeling, warehousing, and pipeline best practices
- Solid understanding of ETL/ELT architecture and implementation
- Strong troubleshooting and problem-solving skills
Preferred Qualifications:
- GCP Data Engineer or Cloud Architect certification
- Familiarity with BigQuery, Dataflow, and Pub/Sub
Interested candidates can send their resume to pranitathapa@onixnet.com
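The orchestration work described above rests on one core idea: Cloud Composer (Airflow) models a workflow as a directed acyclic graph of tasks and schedules each task only after its upstream dependencies succeed. A minimal sketch of that dependency resolution, using the stdlib `graphlib` rather than Airflow itself (task names are hypothetical):

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline: extract feeds both a transform step and a
# parallel quality check; load waits for both. Airflow resolves the
# same DAG ordering when scheduling task instances.
dag = {
    "extract": set(),
    "transform": {"extract"},
    "quality_check": {"extract"},
    "load": {"transform", "quality_check"},
}
order = list(TopologicalSorter(dag).static_order())
print(order)  # 'extract' comes first, 'load' last
```

In an actual Airflow DAG file, the same dependencies would be declared with operators and the `>>` bit-shift syntax (e.g. `extract >> [transform, quality_check] >> load`).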
Posted 1 week ago
0 years
9 - 9 Lacs
Chennai
On-site
Our people work differently depending on their jobs and needs. From hybrid working to flexible hours, we have plenty of options that help our people to thrive. This role is based in India and as such all normal working days must be carried out in India.
Job description
Join us as a PySpark and Big Data Developer
This is an opportunity for a driven Software Engineer to take on an exciting new career challenge. Day-to-day, you'll be engineering and maintaining innovative, customer-centric, high-performance, secure and robust solutions. It’s a chance to hone your existing technical skills and advance your career while building a wide network of stakeholders. We're offering this role at associate level.
What you'll do
In your new role, you’ll be working within a feature team to engineer software, scripts and tools, as well as liaising with other engineers, architects and business analysts across the platform. You’ll also be:
- Producing complex and critical software rapidly and of high quality which adds value to the business
- Working in permanent teams who are responsible for the full life cycle, from initial development, through enhancement and maintenance to replacement or decommissioning
- Collaborating to optimise our software engineering capability
- Designing, producing, testing and implementing our working software solutions
- Working across the life cycle, from requirements analysis and design, through coding to testing, deployment and operations
The skills you'll need
To take on this role, you’ll need a background in software engineering, software design, and architecture, and an understanding of how your area of expertise supports our customers. You'll need at least six years of experience in PySpark, SQL, Snowflake and Big Data. You'll also need experience in JIRA, Confluence and REST API calls. Experience working with AWS in the financial domain is desired.
You’ll also need:
- Experience of working with development and testing tools, bug tracking tools and wikis
- Experience in multiple programming languages or low-code toolsets
- Experience of DevOps and Agile methodology and the associated toolsets
- Experience developing and executing unit test cases
- Experience of implementing programming best practice, especially around scalability, automation, virtualisation, optimisation, availability and performance
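The "developing and executing unit test cases" item above can be sketched with Python's stdlib unittest, the natural choice in a PySpark codebase. The helper function and its tests are hypothetical examples, not part of any actual codebase:

```python
import unittest

def to_snake_case(name: str) -> str:
    """Hypothetical helper a feature team might ship: normalise a
    report column header to snake_case before loading into Snowflake."""
    return "_".join(name.strip().lower().split())

class ToSnakeCaseTest(unittest.TestCase):
    def test_basic(self):
        self.assertEqual(to_snake_case("Account Number"), "account_number")

    def test_surrounding_whitespace(self):
        self.assertEqual(to_snake_case("  Trade  Date "), "trade_date")

# Run the suite programmatically so the result can be inspected,
# rather than calling unittest.main() which exits the process.
suite = unittest.TestLoader().loadTestsFromTestCase(ToSnakeCaseTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```

In CI, the same tests would typically be discovered and run by the pipeline's test stage (e.g. via `python -m unittest` or pytest) rather than a hand-rolled runner.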
Posted 1 week ago
4.0 - 6.0 years
0 Lacs
Chennai
On-site
Job Description: About us At Bank of America, we are guided by a common purpose to help make financial lives better through the power of every connection. Responsible Growth is how we run our company and how we deliver for our clients, teammates, communities, and shareholders every day. One of the keys to driving Responsible Growth is being a great place to work for our teammates around the world. We’re devoted to being a diverse and inclusive workplace for everyone. We hire individuals with a broad range of backgrounds and experiences and invest heavily in our teammates and their families by offering competitive benefits to support their physical, emotional, and financial well-being. Bank of America believes both in the importance of working together and offering flexibility to our employees. We use a multi-faceted approach for flexibility, depending on the various roles in our organization. Working at Bank of America will give you a great career with opportunities to learn, grow and make an impact, along with the power to make a difference. Join us! Global Business Services Global Business Services delivers Technology and Operations capabilities to Lines of Business and Staff Support Functions of Bank of America through a centrally managed, globally integrated delivery model and globally resilient operations. Global Business Services is recognized for flawless execution, sound risk management, operational resiliency, operational excellence and innovation. In India, we are present in five locations and operate as BA Continuum India Private Limited (BACI), a non-banking subsidiary of Bank of America Corporation and the operating company for India operations of Global Business Services. Process Overview Global Markets Technology & Operations provides end-to-end technology solutions for Markets business including Equity, Prime Brokerage, Interest Rates, Currencies, Commodities, Derivatives and Structured Products. 
Across all these products, solutions include architecture, design, development, change management, implementation and support using various enterprise technologies. In addition, GMT&O provides Sales, Electronic Trading, Trade Workflow, Pricing, Market Risk, Middle Office, Collateral Management, Credit Risk, Post-trade Confirmation, Settlement and Client Service processes for Trading, Capital Markets, and Wealth Management businesses. ERTF – CFO is responsible for the technology solutions and platforms that support the Chief Financial Officer (CFO) Group, including Global Financial Control, Corporate Treasury, Financial Forecasting, Enterprise Cost Management, Investor Relations, and Line of Business Finance functions (BFO). Increased demand for integrated and streamlined business finance management solutions has resulted in a few initiatives, spanning Subledger Simplification. The AML Detection Channel Platform (ADCP) application is an AML monitoring tool used by GFCC to identify suspicious activity, such as money laundering and fraud, that requires compliance review. The AML Alert Reconciliation Process (ARL) is another application used by GFCC which receives alerts from various detection channels (AML, Fraud, etc.), removes alert noise, enriches alerts, obtains attributes, decides (rule-based) whether the alert meets criteria, and transforms the alert into an event. Job Description: This job is responsible for developing and delivering complex requirements to accomplish business goals. Key responsibilities of the job include ensuring that software is developed to meet functional, non-functional and compliance requirements, and that solutions are well designed with maintainability/ease of integration and testing built in from the outset. Job expectations include a strong knowledge of development and testing practices common to the industry and of design and architectural patterns.
Responsibilities:
- Codes solutions and unit tests to deliver a requirement/story per the defined acceptance criteria and compliance requirements
- Designs, develops, and modifies architecture components, application interfaces, and solution enablers while ensuring principal architecture integrity is maintained
- Mentors other software engineers and coaches the team on Continuous Integration and Continuous Delivery (CI/CD) practices and the automation tool stack
- Executes story refinement, definition of requirements, and estimation of work necessary to realize a story through the delivery lifecycle
- Performs spikes/proofs of concept as necessary to mitigate risk or implement new ideas
- Automates manual release activities
- Designs, develops, and maintains automated test suites (integration, regression, performance)
- Manager of Process & Data: Demonstrates and expects process knowledge, data-driven decisions, simplicity and continuous improvement.
- Enterprise Advocate & Communicator: Delivers clear and concise messages that motivate, convey the “why” and connect contributions to business results.
- Risk Manager: Leads and encourages the identification, escalation and resolution of potential risks.
- People Manager & Coach: Knows and develops team members through coaching and feedback.
- Financial Steward: Manages expenses and demonstrates an owner’s mindset.
- Enterprise Talent Leader: Recruits, on-boards and develops talent, and supports talent mobility for career growth.
- Driver of Business Outcomes: Delivers results through effective team management, structure, and routines.
Requirements
Education: BE/BTech/ME/MTech
Certifications, if any: NA
Experience range: 4-6 years
Foundational Skills:
- Ability to work on multi-technology projects
- PySpark, Teradata, SQL, Unix shell scripting
- Excellent oral/written communication and project management skills
Desired Skills:
- Organized and able to multi-task in a fast-paced environment
- Highly motivated, able to work independently, a self-starter with strong problem-solving/analytical skills
- Excellent interpersonal skills; positive attitude; team player; flexible
- Willingness to learn and adapt to changes
Shift Timings: 11:00 am to 8:00 pm
Location: Chennai
Posted 1 week ago