0 years
0 Lacs
Pune, Maharashtra, India
On-site
Details: Job Description
Stefanini Group is a multinational company with a global presence in 41 countries and 44 languages, specializing in technological solutions. We believe in digital innovation and agility to transform businesses for a better future. Our diverse portfolio includes consulting, marketing, mobility, AI services, service desk, field service, and outsourcing solutions.

Job Requirements
Role: Data Scientist
Experience: 6-9 years
Location: Pune only
Interview: 2 rounds

Mandatory Skills:
- Experience in deep learning engineering (mostly MLOps)
- Strong NLP/LLM experience, including processing text with LLMs
- Proficient in PySpark/Databricks and Python programming
- Building backend applications (data processing, etc.) using Python and deep learning frameworks
- Deploying models and building APIs (FastAPI, Flask)
- Experience working with GPUs
- Working knowledge of vector databases such as Milvus, Azure Cognitive Search, and Qdrant
- Experience with transformers and Hugging Face models (e.g., Llama, Mixtral) and embedding models

Good to Have:
- Knowledge of and experience with Kubernetes, Docker, etc.
- Cloud experience working with VMs and Azure Storage
- Sound data engineering experience

Pune: Hybrid. Shift: 1 PM to 10 PM
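The stack above pairs Hugging Face models with API serving (FastAPI/Flask). As a hedged illustration (the model name, route, and request schema are assumptions, not details from the posting), a minimal FastAPI wrapper around a Hugging Face pipeline might look like this:

```python
# Minimal sketch: serving a Hugging Face model behind a FastAPI endpoint.
# The model and route are illustrative; a production service would add
# batching, auth, and GPU placement as the posting implies.
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

app = FastAPI()
# Small classification model as a stand-in for the LLM/embedding models named above.
classifier = pipeline("text-classification",
                      model="distilbert-base-uncased-finetuned-sst-2-english")

class TextIn(BaseModel):
    text: str

@app.post("/classify")
def classify(payload: TextIn):
    # pipeline() returns a list of {"label": ..., "score": ...} dicts
    result = classifier(payload.text)[0]
    return {"label": result["label"], "score": result["score"]}

# Run locally with: uvicorn app:app --reload
```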
Posted 2 weeks ago
3.0 - 8.0 years
6 - 15 Lacs
Ahmedabad
Work from Office
Job Description: As an ETL Developer, you will be responsible for designing, building, and maintaining ETL pipelines using the MSBI stack, Azure Data Factory (ADF), and Fabric. You will work closely with data engineers, analysts, and other stakeholders to ensure data is accessible, reliable, and processed efficiently.

Key Responsibilities:
- Design, develop, and deploy ETL pipelines using ADF and Fabric.
- Collaborate with data engineers and analysts to understand data requirements and translate them into efficient ETL processes.
- Optimize data pipelines for performance, scalability, and robustness.
- Integrate data from various sources, including S3, relational databases, and APIs.
- Implement data validation and error-handling mechanisms to ensure data quality.
- Monitor and troubleshoot ETL jobs to ensure data accuracy and pipeline reliability.
- Maintain and update existing data pipelines as data sources and requirements evolve.
- Document ETL processes, data models, and pipeline configurations.

Qualifications:
- Experience: 3+ years of ETL development, with a focus on ADF, the MSBI stack, SQL, Power BI, and Fabric.
- Technical Skills: Strong hands-on expertise in ADF, Fabric, the MSBI stack, SQL, and Power BI; proficiency in a programming language such as Python or Scala; solid understanding of data warehousing concepts, data modeling, and ETL best practices; familiarity with orchestration tools like Apache Airflow is a plus.
- Data Integration: Experience integrating data from diverse sources, including relational databases, APIs, and flat files.
- Problem-Solving: Strong analytical and problem-solving skills, with the ability to troubleshoot complex ETL issues.
- Communication: Excellent communication skills, with the ability to work collaboratively with cross-functional teams.
- Education: Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent work experience.

Nice to Have: Experience with data lakes and big data processing; knowledge of data governance and security practices in a cloud environment.
Posted 2 weeks ago
3.0+ years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job descriptions may display in multiple languages based on your language selection.

What We Offer
At Magna, you can expect an engaging and dynamic environment where you can help to develop industry-leading automotive technologies. We invest in our employees, providing them with the support and resources they need to succeed. As a member of our global team, you can expect exciting, varied responsibilities as well as a wide range of development prospects, because we believe that your career path should be as unique as you are.

Group Summary
Magna is more than one of the world's largest suppliers in the automotive space. We are a mobility technology company built to innovate, with a global, entrepreneurial-minded team. With 65+ years of expertise, our ecosystem of interconnected products combined with our complete vehicle expertise uniquely positions us to advance mobility in an expanded transportation landscape.

Job Responsibilities
Magna New Mobility is seeking a Data Engineer to join our Software Platform team. As a backend developer with cloud experience, you will be responsible for designing, developing, and maintaining the server-side components of our applications. You will work closely with cross-functional teams to ensure our systems are scalable, reliable, and secure. Your expertise in cloud platforms will be crucial in optimizing our infrastructure and deploying solutions that leverage cloud-native features.

Your Responsibilities
- Design & Development: Develop robust, scalable, and high-performance backend systems and APIs. Design and implement server-side logic and integrate with front-end components.
- Database Knowledge: Strong experience with relational databases (e.g., MySQL, PostgreSQL) and NoSQL databases, especially MongoDB. Proficient in SQL and in handling medium- to large-scale datasets on big data platforms like Databricks. Familiarity with Change Data Capture (CDC) concepts and hands-on experience with modern data streaming and integration tools such as Debezium and Apache Kafka.
- Cloud Integration: Leverage cloud platforms (e.g., AWS, Azure, Google Cloud) to deploy, manage, and scale applications. Implement cloud-based solutions for storage, computing, and networking.
- Security: Implement and maintain security best practices, including authentication, authorization, and data protection.
- Performance Optimization: Identify and resolve performance bottlenecks. Monitor application performance and implement improvements as needed.
- Collaboration: Work with product managers, front-end developers, and other stakeholders to understand requirements and deliver solutions. Participate in code reviews and contribute to team knowledge sharing.
- Troubleshooting: Diagnose and resolve issues related to backend systems and cloud infrastructure. Provide support for production environments and ensure high availability.

Who We Are Looking For
- Bachelor's degree or equivalent experience in Computer Science or a relevant technical field.
- Knowledge of and experience with microservices architecture.
- 3+ years of experience in backend development with a strong focus on cloud technologies.
- Technical Skills: Proficiency in backend programming languages such as Go, Python, Node.js, C/C++, or Java. Experience with any cloud platform (AWS, Azure, Google Cloud) and related services (e.g., EC2, Lambda, S3, CloudFormation).
- Experience building scalable ETL pipelines on industry-standard ETL orchestration tools (Airflow, Dagster, Luigi, Google Cloud Composer, etc.) with deep expertise in SQL, PySpark, or Scala.
- Database Knowledge: Experience with relational databases (e.g., MySQL, PostgreSQL) and NoSQL databases (e.g., MongoDB). Expertise in SQL and in using big data technologies (e.g., Hive, Presto, Spark, Iceberg, Flink, Databricks) on medium- to large-scale data.
- DevOps: Familiarity with CI/CD pipelines, infrastructure as code (IaC), containerization (Docker), and orchestration tools (Kubernetes).

Awareness, Unity, Empowerment
At Magna, we believe that a diverse workforce is critical to our success. That's why we are proud to be an equal opportunity employer. We hire on the basis of experience and qualifications, and in consideration of job requirements, regardless of, in particular, color, ancestry, religion, gender, origin, sexual orientation, age, citizenship, marital status, disability or gender identity. Magna takes the privacy of your personal information seriously. We discourage you from sending applications via email or traditional mail, to comply with GDPR requirements and your local data privacy law.

Worker Type: Regular / Permanent
Group: Magna Corporate
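Since the role calls out Change Data Capture with Debezium and Kafka, here is a minimal sketch of consuming Debezium change events from a Kafka topic with kafka-python; the topic name, broker address, and table are illustrative assumptions:

```python
# Sketch: consuming Debezium CDC events from Kafka (kafka-python).
# Topic and broker names are illustrative assumptions.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "dbserver1.inventory.customers",   # typical Debezium topic: <server>.<schema>.<table>
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")) if v else None,
    auto_offset_reset="earliest",
)

for message in consumer:
    event = message.value
    if event is None:                  # tombstone record following a delete
        continue
    payload = event.get("payload", {})
    op = payload.get("op")             # "c"=create, "u"=update, "d"=delete, "r"=snapshot read
    if op in ("c", "r"):
        print("insert/snapshot:", payload.get("after"))
    elif op == "u":
        print("update:", payload.get("before"), "->", payload.get("after"))
    elif op == "d":
        print("delete:", payload.get("before"))
```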
Posted 2 weeks ago
6.0 - 10.0 years
4 - 8 Lacs
Hyderabad
Work from Office
We are looking for a skilled Senior Oracle Data Engineer to join our team at Apps Associates (I) Pvt. Ltd. The role requires 6-10 years of experience in the IT Services & Consulting industry.

Roles and Responsibilities
- Design, develop, and implement data engineering solutions using Oracle technologies.
- Collaborate with cross-functional teams to identify and prioritize project requirements.
- Develop and maintain large-scale data pipelines and architectures.
- Ensure data quality, integrity, and security through data validation and testing procedures.
- Optimize data processing workflows for improved performance and efficiency.
- Troubleshoot and resolve complex technical issues related to data engineering projects.

Job Requirements
- Strong knowledge of Oracle data engineering concepts and technologies.
- Experience with data modeling, design, and development.
- Proficiency in programming languages such as Java or Python.
- Excellent problem-solving skills and attention to detail.
- Ability to work collaboratively in a team environment.
- Strong communication and interpersonal skills.
Posted 2 weeks ago
3.0 - 6.0 years
0 Lacs
Gurgaon
On-site
At Moody's, we unite the brightest minds to turn today's risks into tomorrow's opportunities. We do this by striving to create an inclusive environment where everyone feels welcome to be who they are, with the freedom to exchange ideas, think innovatively, and listen to each other and customers in meaningful ways. If you are excited about this opportunity but do not meet every single requirement, please apply! You still may be a great fit for this role or other open roles. We are seeking candidates who model our values: invest in every relationship, lead with curiosity, champion diverse perspectives, turn inputs into actions, and uphold trust through integrity.

Responsibilities
- Develop and expand our core data platform in MS Azure and Fabric, building robust data infrastructure and scalable solutions.
- Enhance datasets and transformation toolsets on the MS Azure platform, leveraging distributed processing frameworks to modernize processes and expand the codebase.
- Design and maintain ETL pipelines to ensure data is transformed, cleaned, and standardized for business use.
- Collaborate with cross-functional teams to deliver high-quality data solutions, contributing to both UI and backend development while translating UX/UI designs into functional interfaces.
- Develop scripts for building, deploying, and maintaining data systems, while utilizing tools for data exploration, analysis, and visualization throughout the project lifecycle.
- Utilize SQL and NoSQL databases for effective data management, and support Agile practices with tools like Jira and GitHub.
- Contribute to technology standards and best practices in data warehousing and modeling, ensuring alignment with the overall data strategy.
- Lead and motivate teams through periods of change, fostering a collaborative and innovative work environment.

Skills and Competencies
- 3-6 years of cloud-based data engineering experience, with expertise in Microsoft Azure and other cloud platforms.
- Proficient in SQL and experienced with NoSQL databases, message queues, and streaming platforms like Kafka.
- Strong knowledge of Python and big data processing using PySpark, along with experience in CI/CD pipelines (Jenkins, GitHub, Terraform).
- Familiar with machine learning libraries such as TensorFlow and Keras, and skilled in data visualization tools like Power BI/Fabric and Matplotlib.
- Expertise in data wrangling, including cleaning, preprocessing, and transformation, with a solid foundation in statistics and probability.
- Excellent communication skills for engaging with technical and non-technical audiences across all organizational levels.
- Experience in UI development (translating UX/UI designs into code), data warehousing concepts, API development and integration, and workflow orchestration tools is desired.

Education
Bachelor's or Master's degree in Computer Science, Engineering, Data Science, Mathematics, Statistics, or a related field. Relevant certifications in data science and machine learning are a plus.

About the team
Our Technology Services Group (TSG) team is responsible for delivering innovative, data-driven tech solutions. We build solutions that power analytics, enable machine learning, and provide critical insights across the organization. By joining our team, you will be part of exciting work building scalable, next-generation data solutions that directly impact business strategy. Moody's is an equal opportunity employer.
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, sexual orientation, gender expression, gender identity or any other characteristic protected by law. Candidates for Moody's Corporation may be asked to disclose securities holdings pursuant to Moody’s Policy for Securities Trading and the requirements of the position. Employment is contingent upon compliance with the Policy, including remediation of positions in those holdings as necessary.
Posted 2 weeks ago
4.0 years
8 - 10 Lacs
Gurgaon
On-site
Achieving our goals starts with supporting yours. Grow your career, access top-tier health and wellness benefits, build lasting connections with your team and our customers, and travel the world using our extensive route network. Come join us to create what's next. Let's define tomorrow, together.

Description
As an airline, safety is our most important principle, and our Corporate Safety team is responsible for making sure safety is top of mind in every action we take. From conducting flight safety investigations and educating pilots on potential safety threats to implementing medical programs and helping prevent employee injuries, our team is instrumental in running a safe and successful airline for our customers and employees.

Job overview and responsibilities
Corporate safety is integral to ensuring a safe workplace for our employees and a safe travel experience for our customers. This role is responsible for supporting the development and implementation of a cohesive safety data strategy and supporting the Director of Safety Management Systems (SMS) in growing United's Corporate Safety predictive analytics capabilities. The Senior Analyst will serve as a subject matter expert for corporate safety data analytics and predictive insight strategy and execution. The position supports new efforts to deliver insightful data analysis and build new key metrics for use by the entire United Safety organization, with the goal of enabling data-driven decision making for corporate safety. The Senior Analyst will become the subject matter expert in several corporate-safety-specific data streams, leveraging this expertise to deliver actionable insights that allow a predictive approach to safety risk mitigation.

- Develop and implement predictive/prescriptive data analytics workflows for safety data management, streamlining processes
- Collaborate with Digital Technology and United operational teams to analyze, predict, and reduce safety risks and provide measurable solutions
- Partner with the Digital Technology team to develop streamlined and comprehensive data analytics workstreams
- Support United's Safety Management System (SMS) with predictive data analytics by designing and developing statistical models
- Manage and maintain the project portfolio of the SMS data team; areas of focus include, but are not limited to: predictive and prescriptive analytics; training and validating models; creation and maintenance of standardized corporate safety performance metrics; design and implementation of new data pipelines; delivery of prescriptive analysis insights to internal stakeholders
- Design and maintain new and existing corporate safety data pipelines and analytical workflows
- Create and manage new methods for data analysis that provide prescriptive and predictive insights on corporate safety data
- Partner with US- and India-based internal partners to establish new data analysis workflows and provide analytical support to corporate and divisional work groups
- Collaborate with corporate and divisional safety partners to ensure standardization and consistency across all safety analytics efforts enterprise-wide
- Provide support and ongoing subject matter expertise for a set of high-priority corporate safety datasets and the analytics efforts on those datasets
- Provide tracking and status-update reporting on ongoing assignments, projects, and efforts to US- and India-based leaders

This position is offered on local terms and conditions.
Expatriate assignments and sponsorship for employment visas, even on a time-limited visa status, will not be awarded. This position is for United Airlines Business Services Pvt. Ltd, a wholly owned subsidiary of United Airlines Inc.

Qualifications

What's needed to succeed (Minimum Qualifications):
- Bachelor's degree in computer science, data science, information systems, engineering, or another quantitative field (e.g., mathematics, statistics, economics)
- 4+ years of experience in data analytics, predictive modeling, or statistics
- Expert-level SQL skills
- Experience with Microsoft SQL Server Management Studio and hands-on experience working with massive data sets
- Proficiency writing complex code using both traditional and modern technologies/languages (e.g., Python, HTML, JavaScript, Power Automate, Spark, Node) for queries, procedures, and analytic processing to create usable data insight
- Ability to study and understand business needs, then design a data/technology solution that connects business processes with quantifiable outcomes
- Strong project management and communication skills
- 3-4 years working with complex data (data analytics, information science, data visualization, or another relevant quantitative field)
- Must be legally authorized to work in India for any employer without sponsorship
- Must be fluent in English (written and spoken)
- Successful completion of the interview is required to meet job qualifications
- Reliable, punctual attendance is an essential function of the position

What will help you propel from the pack (Preferred Qualifications):
- Master's degree
- ML/AI experience
- Experience with PySpark, Apache Spark, or Hadoop for massive data sets
Posted 2 weeks ago
0 years
0 Lacs
Gurgaon
On-site
Our Purpose
Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we're helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart, and accessible. Our technology and innovation, partnerships, and networks combine to deliver a unique set of products and services that help people, businesses, and governments realize their greatest potential.

Title and Summary
Associate Managing Consultant, Strategy & Transformation

Overview: Associate Managing Consultant – Performance Analytics, Advisors & Consulting Services
Services within Mastercard is responsible for acquiring, engaging, and retaining customers by managing fraud and risk, enhancing cybersecurity, and improving the digital payments experience. We provide value-added services and leverage expertise, data-driven insights, and execution. Our Advisors & Consulting Services team combines traditional management consulting with Mastercard's rich data assets, proprietary platforms, and technologies to provide clients with powerful strategic insights and recommendations. Our teams work with a diverse global customer base across industries, from banking and payments to retail and restaurants.

The Advisors & Consulting Services group has five specializations: Strategy & Transformation, Performance Analytics, Business Experimentation, Marketing, and Program Management. Our Performance Analytics consultants translate data into insights by leveraging Mastercard and customer data to design, implement, and scale analytical solutions for customers. They use qualitative and quantitative analytical techniques and enterprise applications to synthesize analyses into clear recommendations and impactful narratives. Positions for different specializations and levels are available in separate job postings. Please review our consulting specializations to learn more about all opportunities and apply for the position that is best suited to your background and experience: https://careers.mastercard.com/us/en/consulting-specializations-at-mastercard

Roles and Responsibilities

Client Impact
- Manage deliverable development and workstreams on projects across a range of industries and problem statements
- Contribute to and/or develop analytics strategies and programs for large regional and global clients by leveraging data and technology solutions to unlock client value
- Manage working relationships with client managers, and act as a trusted and reliable partner
- Create predictive models using segmentation and regression techniques to drive profits (a sketch follows this listing)
- Review analytics end-products to ensure accuracy, quality, and timeliness.
Proactively seek new knowledge and structure project work to facilitate the capture of intellectual capital with minimal oversight

Team Collaboration & Culture
- Develop sound business recommendations and deliver effective client presentations
- Plan, organize, and structure your own work and that of junior project delivery consultants to identify effective analysis structures that address client problems, and synthesize analyses into relevant findings
- Lead team and external meetings, and lead or co-lead project management
- Contribute to the firm's intellectual capital and solution development
- Grow from coaching into ownership of day-to-day project management across client projects, and mentor junior consultants
- Develop effective working relationships with local and global teams, including business partners

Qualifications

Basic qualifications
- Undergraduate degree with data and analytics experience in business intelligence and/or descriptive, predictive, or prescriptive analytics
- Experience managing clients or internal stakeholders
- Ability to analyze large datasets and synthesize key findings into recommendations via descriptive analytics and business intelligence
- Knowledge of metrics, measurement, and benchmarking for complex, demanding solutions across multiple industry verticals
- Data and analytics experience, such as working with data analytics software (e.g., Python, R, SQL, SAS) and building, managing, and maintaining database structures
- Advanced Word, Excel, and PowerPoint skills
- Ability to manage multiple tasks for multiple clients in a fast-paced, deadline-driven environment
- Ability to communicate effectively in English and the local office language (if applicable)
- Eligibility to work in the country where you are applying, and to apply for travel visas as required by travel needs

Preferred qualifications
- Additional data and analytics experience with the Hadoop framework, coding in Impala, Hive, or PySpark, or working with data visualization tools (e.g., Tableau, Power BI)
- Experience managing tasks or workstreams in a collaborative team environment
- Experience coaching junior delivery consultants
- Relevant industry expertise
- MBA or master's degree with relevant specialization (not required)

Corporate Security Responsibility
All activities involving access to Mastercard assets, information, and networks come with inherent risk to the organization. It is therefore expected that every person working for, or on behalf of, Mastercard is responsible for information security and must:
- Abide by Mastercard's security policies and practices;
- Ensure the confidentiality and integrity of the information being accessed;
- Report any suspected information security violation or breach; and
- Complete all periodic mandatory security trainings in accordance with Mastercard's guidelines.
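The Client Impact responsibilities above mention predictive models built with segmentation and regression techniques. Purely as an illustration on synthetic data (this is not Mastercard's methodology), one common pattern is to cluster customers first and then fit a regression per segment:

```python
# Illustrative only: segment with k-means, then fit a regression per segment.
# Synthetic data; column meanings and thresholds are invented for the sketch.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                        # e.g., spend, frequency, tenure
y = 2.0 * X[:, 0] + rng.normal(scale=0.5, size=500)  # e.g., projected profit

segments = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

models = {}
for seg in np.unique(segments):
    mask = segments == seg
    models[seg] = LinearRegression().fit(X[mask], y[mask])
    print(f"segment {seg}: n={mask.sum()}, R^2={models[seg].score(X[mask], y[mask]):.2f}")
```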
Posted 2 weeks ago
6.0 years
7 - 8 Lacs
Hyderābād
On-site
Full-time
Employee Status: Regular
Role Type: Hybrid
Department: Analytics
Schedule: Full Time

Company Description
Experian is a global data and technology company, powering opportunities for people and businesses around the world. We help to redefine lending practices, uncover and prevent fraud, simplify healthcare, create marketing solutions, and gain deeper insights into the automotive market, all using our unique combination of data, analytics, and software. We also assist millions of people to realize their financial goals and help them save time and money. We operate across a range of markets, from financial services to healthcare, automotive, agribusiness, insurance, and many more industry segments. We invest in people and new advanced technologies to unlock the power of data. As a FTSE 100 Index company listed on the London Stock Exchange (EXPN), we have a team of 22,500 people across 32 countries. Our corporate headquarters are in Dublin, Ireland. Learn more at experianplc.com.

Job Description
The Senior Data Engineer is responsible for designing, developing, and supporting ETL data pipeline solutions, primarily in an AWS environment.
- Design, develop, and maintain scalable ETL processes to deliver meaningful insights from large and complicated data sets.
- Work as part of a team to build out and support the data warehouse; implement solutions using PySpark to process structured and unstructured data.
- Play a key role in building out a semantic layer through development of ETLs and virtualized views.
- Collaborate with engineering teams to discover and leverage new data being introduced into the environment.
- Support existing ETL processes written in SQL or leveraging third-party APIs with Python; troubleshoot and resolve production issues.
- Strong SQL and data skills to understand and troubleshoot existing complex SQL.
- Hands-on experience with Apache Airflow or equivalent tools (AWS MWAA) for orchestration of data pipelines (a sketch of this pattern follows the listing).
- Create and maintain report specifications and process documentation as part of the required data deliverables.
- Serve as liaison between business and technical teams to achieve project objectives, delivering cross-functional reporting solutions.
- Troubleshoot and resolve data, system, and performance issues.
- Communicate with business partners, other technical teams, and management to collect requirements, articulate data deliverables, and provide technical designs.

Qualifications
- BE/BTech graduate.
- 6 to 9 years of experience in data engineering development.
- 5 years of experience in Python scripting.
- 8 years of experience in SQL; 5+ years in data warehousing; 5 years in Agile development methodology; 3 years with cloud.
- 3 years of experience with the AWS ecosystem (Redshift, EMR, S3, MWAA).
- You will work with the team to create solutions.
- Proficiency in CI/CD tools (Jenkins, GitLab, etc.).

Additional Information
Our uniqueness is that we celebrate yours. Experian's culture and people are important differentiators. We take our people agenda very seriously and focus on what matters: DEI, work/life balance, development, authenticity, collaboration, wellness, reward & recognition, volunteering... the list goes on. Experian's people-first approach is award-winning: World's Best Workplaces™ 2024 (Fortune Top 25), Great Place To Work™ in 24 countries, and Glassdoor Best Places to Work 2024, to name a few. Check out Experian Life on social media or our Careers Site to understand why. Experian is proud to be an Equal Opportunity and Affirmative Action employer.
Innovation is an important part of Experian's DNA and practices, and our diverse workforce drives our success. Everyone can succeed at Experian and bring their whole self to work, irrespective of their gender, ethnicity, religion, colour, sexuality, physical ability, or age. If you have a disability or special need that requires accommodation, please let us know at the earliest opportunity. #LI-Onsite

Benefits
Experian cares for employees' work-life balance, health, safety, and wellbeing. In support of this endeavor, we offer family well-being benefits, enhanced medical benefits, and paid time off.

Experian Careers - Creating a better tomorrow together
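The job description above centers on orchestrating ETL pipelines with Apache Airflow (AWS MWAA). A minimal sketch of that pattern follows; the DAG id, schedule, and task bodies are illustrative assumptions, not Experian's pipelines:

```python
# Hedged sketch of an Airflow DAG orchestrating a daily extract -> load step.
# Task names and the callables are illustrative assumptions.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**context):
    print("pull source data for", context["ds"])   # ds = logical date as YYYY-MM-DD

def load(**context):
    print("load transformed data into the warehouse")

with DAG(
    dag_id="daily_etl_sketch",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
    default_args={"retries": 1, "retry_delay": timedelta(minutes=5)},
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task   # load runs only after extract succeeds
```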
Posted 2 weeks ago
9.0 years
15 Lacs
India
On-site
Experience: 9+ years
Location: Pune, Hyderabad (preferred)

JD:
- Experience performing design, development, and deployment using Azure services (Data Factory, Azure Data Lake Storage, Databricks, PySpark, SQL).
- Develop and maintain scalable data pipelines and build out new data source integrations to support continuing increases in data volume and complexity.
- Experience creating technical specification designs and application interface designs.
- File processing: XML, CSV, Excel, ORC, and Parquet formats.
- Develop batch processing, streaming, and integration solutions, and process structured and non-structured data.
- Good to have: ETL development experience both on-premises and in the cloud using SSIS, Data Factory, and related Microsoft and other ETL technologies (Informatica preferred).
- Demonstrated in-depth skills with Azure Data Factory, Azure Databricks, PySpark, and ADLS (must have), with the ability to configure and administer all aspects of Azure SQL DB.
- Collaborate and engage with BI & analytics and business teams.
- Deep understanding of the operational dependencies of applications, networks, systems, security, and policy, both on-premises and in the cloud: VMs, networking, VPN (ExpressRoute), Active Directory, storage (Blob, etc.).

Job Types: Full-time, Permanent
Pay: From ₹1,500,000.00 per year
Schedule: Fixed shift

Application Question(s):
- How many years of total experience do you currently have?
- How many years of experience do you have with Azure data services?
- How many years of experience do you have with Azure Databricks?
- How many years of experience do you have with PySpark?
- What is your current CTC?
- What is your expected CTC?
- What is your notice period / last working day?
- Are you comfortable attending an L2 interview face to face in our Hyderabad or Pune office?
- What is your current and preferred location?
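The JD emphasizes PySpark processing of CSV, Parquet, and similar formats on Azure. Here is a minimal sketch of the read-transform-write pattern, with paths and column names assumed for illustration:

```python
# Minimal PySpark sketch of the CSV -> transform -> Parquet pattern the JD describes.
# Paths and columns are illustrative assumptions (e.g., an ADLS mount or abfss:// URI).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("file_processing_sketch").getOrCreate()

raw = (spark.read
       .option("header", True)
       .option("inferSchema", True)
       .csv("/mnt/raw/orders/*.csv"))

cleaned = (raw
           .dropDuplicates(["order_id"])
           .withColumn("order_date", F.to_date("order_date"))
           .filter(F.col("amount") > 0))

# Partitioned Parquet output, ready for downstream Databricks/Synapse consumers.
cleaned.write.mode("overwrite").partitionBy("order_date").parquet("/mnt/curated/orders")
```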
Posted 2 weeks ago
4.0 - 8.0 years
12 - 16 Lacs
Pune
Work from Office
Job Description
We are seeking a highly skilled and experienced data engineering professional for our data engineering team. The ideal candidate will have extensive hands-on experience with the Microsoft Azure technology stack, including Azure Data Factory, Azure Databricks, Azure SQL Database, Azure Synapse Analytics, and other related services. This role requires a strong focus on data management, data engineering, and governance, ensuring the delivery of high-quality data solutions to support business objectives.

Key Responsibilities:
- Technical Oversight & Delivery: Provide technical guidance and support to team members, promoting best practices and innovative solutions. Oversee the planning, execution, and delivery of data engineering projects, ensuring alignment with business goals and timelines.
- Data Engineering: Design, develop, and maintain scalable and robust data pipelines using Azure Data Factory, Azure Databricks, and other Azure services. Implement ETL/ELT processes to ingest, transform, and load data from various sources (specifically Excel, SAP HANA, APIs, and SQL Server) into data lakes and data warehouses. Optimize data workflows for performance, scalability, and reliability.
- Data Management: Ensure data quality, integrity, and consistency across all data platforms. Manage data storage, retrieval, and archiving solutions, leveraging Azure Blob Storage, Azure Data Lake, and Azure SQL Database. Develop and enforce data management policies and procedures.
- Data Governance: Establish and maintain data governance frameworks, including data cataloging, lineage, and metadata management. Implement data security and privacy measures, ensuring compliance with relevant regulations and industry standards. Monitor and audit data usage, access controls, and data protection practices.
- Collaboration & Communication: Collaborate with cross-functional teams, including data scientists, analysts, and business stakeholders, to understand data requirements and deliver solutions. Communicate complex technical concepts to non-technical stakeholders, ensuring transparency and alignment. Provide regular updates and reports on data engineering activities, progress, and challenges.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, Engineering, or a related field.
- Strong hands-on experience with the Microsoft Azure technology stack, including but not limited to: Azure Data Factory, Azure Databricks, Azure SQL Database, Azure Synapse Analytics, and Azure Data Lake Storage.
- Proficiency in programming languages such as SQL, Python, and Scala.
- Experience with data modeling, ETL/ELT processes, Medallion Architecture, and data warehousing solutions.
- Solid understanding of data governance principles, data quality management, and data security best practices.
- Excellent problem-solving skills and the ability to work in a fast-paced, dynamic environment.
- Strong communication, leadership, and project management skills.

Preferred Qualifications:
- Azure certifications such as Microsoft Certified: Azure Data Engineer Associate or Microsoft Certified: Azure Solutions Architect Expert.
- Experience with other data platforms and tools such as Power BI, Azure Machine Learning, and Azure DevOps.
- Familiarity with big data technologies and frameworks like Hadoop and Spark.
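The qualifications mention Medallion Architecture. As a hedged illustration of the bronze-to-silver hop on Databricks Delta (table names and columns are assumptions, not this employer's schema):

```python
# Hedged sketch of a Medallion bronze -> silver step on Databricks Delta.
# Table and column names are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

bronze = spark.read.table("bronze.sales_raw")        # raw ingested records

silver = (bronze
          .filter(F.col("quantity").isNotNull())     # basic quality gate
          .withColumn("ingest_date", F.to_date("ingest_ts"))
          .dropDuplicates(["sale_id"]))              # dedupe on the business key

# Silver tables hold cleaned, conformed data for downstream gold aggregates.
silver.write.format("delta").mode("overwrite").saveAsTable("silver.sales_clean")
```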
Posted 2 weeks ago
5.0 - 8.0 years
12 - 18 Lacs
Pune
Work from Office
Job Description
We are seeking a highly skilled and experienced data engineering professional for our data engineering team. The ideal candidate will have extensive hands-on experience with the Microsoft Azure technology stack, including Azure Data Factory, Azure Databricks, Azure SQL Database, Azure Synapse Analytics, and other related services. This role requires a strong focus on data management, data engineering, and governance, ensuring the delivery of high-quality data solutions to support business objectives.

Key Responsibilities:
- Technical Oversight & Delivery: Provide technical guidance and support to team members, promoting best practices and innovative solutions. Oversee the planning, execution, and delivery of data engineering projects, ensuring alignment with business goals and timelines.
- Data Engineering: Design, develop, and maintain scalable and robust data pipelines using Azure Data Factory, Azure Databricks, and other Azure services. Implement ETL/ELT processes to ingest, transform, and load data from various sources (specifically Excel, SAP HANA, APIs, and SQL Server) into data lakes and data warehouses. Optimize data workflows for performance, scalability, and reliability.
- Data Management: Ensure data quality, integrity, and consistency across all data platforms. Manage data storage, retrieval, and archiving solutions, leveraging Azure Blob Storage, Azure Data Lake, and Azure SQL Database. Develop and enforce data management policies and procedures.
- Data Governance: Establish and maintain data governance frameworks, including data cataloging, lineage, and metadata management. Implement data security and privacy measures, ensuring compliance with relevant regulations and industry standards. Monitor and audit data usage, access controls, and data protection practices.
- Collaboration & Communication: Collaborate with cross-functional teams, including data scientists, analysts, and business stakeholders, to understand data requirements and deliver solutions. Communicate complex technical concepts to non-technical stakeholders, ensuring transparency and alignment. Provide regular updates and reports on data engineering activities, progress, and challenges.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, Engineering, or a related field.
- More than five years of strong hands-on experience with the Microsoft Azure technology stack, including but not limited to: Azure Data Factory, Azure Databricks, Azure SQL Database, Azure Synapse Analytics, and Azure Data Lake Storage.
- Proficiency in programming languages such as SQL, Python, and Scala.
- Experience with data modeling, ETL/ELT processes, Medallion Architecture, and data warehousing solutions.
- Solid understanding of data governance principles, data quality management, and data security best practices.
- Excellent problem-solving skills and the ability to work in a fast-paced, dynamic environment.
- Strong communication, leadership, and project management skills.

Preferred Qualifications:
- Azure certifications such as Microsoft Certified: Azure Data Engineer Associate or Microsoft Certified: Azure Solutions Architect Expert.
- Experience with other data platforms and tools such as Power BI, Azure Machine Learning, and Azure DevOps.
- Familiarity with big data technologies and frameworks like Hadoop and Spark.
Posted 2 weeks ago
7.0+ years
0 Lacs
Hyderābād
On-site
Country/Region: IN
Requisition ID: 27524
Location: INDIA - HYDERABAD - BIRLASOFT OFFICE
Title: Azure Databricks Developer

Description: Area(s) of responsibility

About Us: Birlasoft, a global leader at the forefront of Cloud, AI, and Digital technologies, seamlessly blends domain expertise with enterprise solutions. The company's consultative and design-thinking approach empowers societies worldwide, enhancing the efficiency and productivity of businesses. As part of the multibillion-dollar diversified CKA Birla Group, Birlasoft, with its 12,000+ professionals, is committed to continuing the Group's 170-year heritage of building sustainable communities.

Azure Data Engineer with Databricks (7+ years)
Experience: 7+ years

Job Description:
- Experience performing design, development, and deployment using Azure services (Data Factory, Azure Data Lake Storage, Databricks, PySpark, SQL).
- Develop and maintain scalable data pipelines and build out new data source integrations to support continuing increases in data volume and complexity.
- Experience creating technical specification designs and application interface designs.
- File processing: XML, CSV, Excel, ORC, and Parquet formats.
- Develop batch processing, streaming, and integration solutions, and process structured and non-structured data.
- Good to have: ETL development experience both on-premises and in the cloud using SSIS, Data Factory, and related Microsoft and other ETL technologies (Informatica preferred).
- Demonstrated in-depth skills with Azure Data Factory, Azure Databricks, PySpark, and ADLS (must have), with the ability to configure and administer all aspects of Azure SQL DB.
- Collaborate and engage with BI & analytics and business teams.
- Deep understanding of the operational dependencies of applications, networks, systems, security, and policy, both on-premises and in the cloud: VMs, networking, VPN (ExpressRoute), Active Directory, storage (Blob, etc.).
Posted 2 weeks ago
5.0 years
6 - 9 Lacs
Hyderābād
Remote
Overview: The primary focus is development work within the Azure Data Lake environment and other related ETL technologies, with responsibility for on-time and on-budget delivery, satisfying project requirements while adhering to enterprise architecture standards. This role also carries L3 responsibilities for ETL processes.

Responsibilities:
- Delivery of key Azure Data Lake projects within time and budget.
- Contribute to solution design and build to ensure scalability, performance, and reuse of data and other components.
- Ensure on-time and on-budget delivery that satisfies project requirements while adhering to enterprise architecture standards.
- Possess strong problem-solving abilities with a focus on managing to business outcomes through collaboration with multiple internal and external parties.
- Enthusiastic, willing, and able to learn and continuously develop skills and techniques; enjoys change and seeks continuous improvement.
- A clear communicator, both written and verbal, with good presentational skills; fluent and proficient in the English language.
- Customer-focused and a team player.

Qualifications:
- Bachelor's degree in Computer Science, MIS, Business Management, or a related field.
- 5+ years of experience in information technology.
- 4+ years of experience in Azure Data Lake.

Technical Skills:
- Proven development experience in data, BI, or analytics projects.
- Solutions delivery experience: knowledge of the system development lifecycle, integration, and sustainability.
- Strong knowledge of PySpark and SQL.
- Good knowledge of Azure Data Factory or Databricks.
- Knowledge of Presto/Denodo is desirable.
- Knowledge of FMCG business processes is desirable.

Non-Technical Skills:
- Excellent remote collaboration skills.
- Experience working in a matrix organization with diverse priorities.
- Exceptional written and verbal communication, collaboration, and listening skills.
- Ability to work with agile delivery methodologies.
- Ability to ideate requirements and design iteratively with business partners without formal requirements documentation.
Posted 2 weeks ago
0 years
0 Lacs
Hyderābād
On-site
Our vision is to transform how the world uses information to enrich life for all. Micron Technology is a world leader in innovating memory and storage solutions that accelerate the transformation of information into intelligence, inspiring the world to learn, communicate, and advance faster than ever.

About the profile – Smart Manufacturing and AI (Data Science Engineer)
Micron Technology's vision is to transform how the world uses information to enrich life, and our commitment to people, innovation, tenacity, collaboration, and customer focus allows us to fulfill our mission to be a global leader in memory and storage solutions. This means conducting business with integrity, accountability, and professionalism while supporting our global community.

What is the function of the role, and how does it fit into the department?
As a Data Science Engineer at Micron Technology Inc., you will be a key member of a multi-functional team responsible for developing and growing Micron's methods and systems for applied data analysis, modeling, and reporting. You will collaborate with other data scientists, engineers, technicians, and data mining teams to design and implement systems that transform and process data extracted from Micron's business systems, applying advanced statistical and mathematical methods to analyze the data, creating diagnostic and predictive models, and creating dynamic presentation layers for use by high-level engineers and managers throughout the company. You will create new solutions, as well as support, configure, and improve existing ones.

Why would a candidate love to work for this group and team?
We are a Smart Manufacturing and AI organization with a goal to spearhead Industry 4.0 transformation and enable accelerated intelligence and digital operations in the company. Our teams tackle projects that help solve complex real-time business problems, significantly improving yield, cycle time, and quality while reducing the cost of our products. This role also offers a great opportunity to work closely with data scientists, I4.0 analysts, and engineers, and with the latest big data and cloud-based platforms and skillsets. We highly welcome new ideas and are large proponents of innovation.

What are the expectations for the position?
We are seeking Data Science Engineers who are highly passionate about data and associated analysis techniques, can quickly learn new skills, and can design and implement state-of-the-art data science and ML pipelines on-premises and in the cloud. You will interact with experienced data scientists, data engineers, business-area engineers, and UX teams to identify questions and issues for data science, AI, and advanced analysis projects and for improvement of existing tools. In this position, you will help develop software programs, algorithms, and/or automated processes that transform and process data from multiple sources, apply statistical and ML techniques to analyze data, discover underlying patterns or improve prediction capabilities, and deploy advanced visualizations on modern UI platforms. There will be significant opportunities for exploratory work and new solution development.

Roles and responsibilities can include, but are not limited to:

Broad knowledge and experience in:
- Strong desire to grow a career as a Data Scientist in highly automated industrial manufacturing, doing analysis and machine learning on terabytes and petabytes of diverse datasets.
- Ability to extract data from different databases via SQL and other query languages, and to apply data cleansing, outlier identification, and missing-data techniques (a sketch of these techniques follows the listing).
- Ability to apply the latest mathematical and statistical techniques to analyze data and uncover patterns.
- Interest in building web applications as part of the job scope.
- Knowledge of cloud-based analytics and machine learning modeling.
- Knowledge of building APIs for application integration.
- Knowledge in the areas of statistical modeling, feature extraction and analysis, feature engineering, and supervised/unsupervised/semi-supervised learning.
- Data analysis and validation skills.
- Strong software development skills.

Above-average skills in:
- Programming fluency in Python.
- Knowledge of statistics, machine learning, and other advanced analytical methods.
- Knowledge of JavaScript, Angular, or Tableau is an added advantage.
- An OOP background is an added advantage.
- Understanding of PySpark and/or libraries for distributed and parallel processing is an added advantage.
- Knowledge of TensorFlow and/or other statistical software, including scripting capability for automating analyses.
- Experience with time series data, images, semi-supervised learning, and data with frequently changing distributions is a plus.
- Understanding of Manufacturing Execution Systems (MES) is a plus.

Demonstrated ability to:
- Work in a dynamic, fast-paced work environment.
- Be self-motivated and work under minimal direction.
- Adapt to new technologies and learn quickly.
- Bring a passion for data and information, with strong analytical, problem-solving, and organizational skills.
- Work in multi-functional groups, with diverse interests and requirements, toward a common objective.
- Communicate very well with distributed teams (written, verbal, and presentation).

Education: Bachelor's or Master's degree in Computer Science, Mathematics, Data Science, or Physics. CGPA requirement: 7.0 and above.

About Micron Technology, Inc.
We are an industry leader in innovative memory and storage solutions transforming how the world uses information to enrich life for all. With a relentless focus on our customers, technology leadership, and manufacturing and operational excellence, Micron delivers a rich portfolio of high-performance DRAM, NAND, and NOR memory and storage products through our Micron® and Crucial® brands. Every day, the innovations that our people create fuel the data economy, enabling advances in artificial intelligence and 5G applications that unleash opportunities from the data center to the intelligent edge and across the client and mobile user experience. To learn more, please visit micron.com/careers.

All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran or disability status. To request assistance with the application process and/or for reasonable accommodations, please contact hrsupport_india@micron.com.

Micron prohibits the use of child labor and complies with all applicable laws, rules, regulations, and other international and industry labor standards. Micron does not charge candidates any recruitment fees or unlawfully collect any other payment from candidates as consideration for their employment with Micron.

AI alert: Candidates are encouraged to use AI tools to enhance their resume and/or application materials. However, all information provided must be accurate and reflect the candidate's true skills and experiences.
Misuse of AI to fabricate or misrepresent qualifications will result in immediate disqualification.

Fraud alert: Micron advises job seekers to be cautious of unsolicited job offers and to verify the authenticity of any communication claiming to be from Micron by checking the official Micron careers website.
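The role above highlights data cleansing, outlier identification, and missing-data techniques. A small illustrative sketch on synthetic data (not Micron data or methods):

```python
# Illustrative sketch of cleansing/outlier/missing-data work on synthetic data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
df = pd.DataFrame({"sensor": rng.normal(100, 5, 1000)})
df.loc[::97, "sensor"] = np.nan           # inject missing values
df.loc[::193, "sensor"] = 500.0           # inject outliers

# Missing data: simple median imputation.
df["sensor"] = df["sensor"].fillna(df["sensor"].median())

# Outlier identification: interquartile-range (IQR) fence.
q1, q3 = df["sensor"].quantile([0.25, 0.75])
iqr = q3 - q1
mask = df["sensor"].between(q1 - 1.5 * iqr, q3 + 1.5 * iqr)
print(f"flagged {(~mask).sum()} outliers out of {len(df)} rows")
df = df[mask]                             # keep only in-fence readings
```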
Posted 2 weeks ago
5.0 years
0 Lacs
Hyderābād
On-site
Job description
Some careers shine brighter than others. If you're looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further.

HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions.

We are currently seeking an experienced professional to join our team in the role of Consultant Specialist. In this role, you will:
- Provide support across the end-to-end delivery and run lifecycle, utilising your skills and expertise to carry out software development, testing and operational support activities, with the ability to move between these according to demand
- Take end-to-end accountability for a module or part of a product or service, identifying and developing the most appropriate technology solutions to meet customer needs as part of the customer journey
- Liaise with other engineers, architects and business stakeholders to understand and drive the product or service's direction
- Establish a digital environment and automate processes to minimise variation and ensure predictable, high-quality code and data
- Create technical test plans and records, including unit and integration tests, within automated test environments to ensure code quality
- Provide support to DevOps teams working at all stages of a product or service release/change, with a strong customer focus and knowledge of end-to-end journeys, ensuring they have excellent domain knowledge
- Work with Ops, Dev and Test Engineers to ensure operational issues (performance, operator intervention, alerting, design-defect-related issues, etc.) are identified and addressed at all stages of a product or service release/change
- Provide support in identification and resolution of all incidents associated with the IT service, proactively handling production support activities and areas of improvement
- Ensure service resilience, service sustainability and recovery time objectives are met for all the software solutions delivered
- Take responsibility for support and maintenance of the continuous integration/continuous delivery pipeline within a DevOps product/service team, driving a culture of continuous improvement

Requirements
To be successful in this role, you should meet the following requirements:
- 5+ years of experience in handling distributed/big data projects
- Proficiency in PySpark, Linux scripting, SQL and big data tools
- Technology stack: PySpark, ETL, Unix shell scripting, Python, Spark, SQL; big data tools: Hadoop, Hive; DevOps tools
- Strong exposure to interpreting business requirements from a technical perspective; design, develop and implement IT solutions that fulfil business users' requirements and conform to a high quality standard
- Experience with a cloud platform
- Sound problem-solving skills and attention to detail
- Strong communication, presentation and team collaboration skills
- Knowledge of automation and DevOps practices and tools such as Docker, Kubernetes, Jenkins, Ansible, G3, Nexus, Git, and test automation
- Familiarity with agile development methodologies using Jira

You'll achieve more when you join HSBC.
www.hsbc.com/careers HSBC is committed to building a culture where all employees are valued, respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website. Issued by – HSBC Software Development India
Posted 2 weeks ago
7.0 years
20 Lacs
India
On-site
Job Description:
Experience: 7 years
Location: Hyderabad

We are seeking a skilled and dynamic Azure Data Engineer to join our growing data engineering team. The ideal candidate will have a strong background in building and maintaining data pipelines and working with large datasets on the Azure cloud platform. The Azure Data Engineer will be responsible for developing and implementing efficient ETL processes, working with data warehouses, and leveraging cloud technologies such as Azure Data Factory (ADF), Azure Databricks, PySpark, and SQL to process and transform data for analytical purposes.

Key Responsibilities:
- Data Pipeline Development: Design, develop, and implement scalable, reliable, and high-performance data pipelines using Azure Data Factory (ADF), Azure Databricks, and PySpark.
- Data Processing: Develop complex data transformations, aggregations, and cleansing processes using PySpark and Databricks for big data workloads.
- Data Integration: Integrate and process data from various sources such as databases, APIs, cloud storage (e.g., Blob Storage, Data Lake), and third-party services into Azure data services.
- Optimization: Optimize data workflows and ETL processes to ensure efficient data loading, transformation, and retrieval while ensuring data integrity and high performance.
- SQL Development: Write complex SQL queries for data extraction, aggregation, and transformation. Maintain and optimize relational databases and data warehouses.
- Collaboration: Work closely with data scientists, analysts, and other engineering teams to understand data requirements and design solutions that meet business and analytical needs.
- Automation & Monitoring: Implement automation for data pipeline deployment and ensure monitoring, logging, and alerting mechanisms are in place for pipeline health.
- Cloud Infrastructure Management: Work with cloud technologies (e.g., Azure Data Lake, Blob Storage) to store, manage, and process large datasets.
- Documentation & Best Practices: Maintain thorough documentation of data pipelines, workflows, and best practices for data engineering solutions.

Job Type: Full-time
Pay: Up to ₹2,000,000.00 per year
Work Location: In person
Posted 2 weeks ago
4.0 - 7.0 years
18 - 20 Lacs
Pune
Hybrid
Job Title: GCP Data Engineer
Location: Pune, India
Experience: 4 to 7 years
Job Type: Full-Time

Job Summary:
We are looking for a highly skilled GCP Data Engineer with 4 to 7 years of experience to join our data engineering team in Pune. The ideal candidate should have strong experience working with Google Cloud Platform (GCP), including Dataproc and Cloud Composer (Apache Airflow), and must be proficient in Python, SQL, and Apache Spark. The role involves designing, building, and optimizing data pipelines and workflows to support enterprise-grade analytics and data science initiatives.

Key Responsibilities:
- Design and implement scalable and efficient data pipelines on GCP, leveraging Dataproc, BigQuery, Cloud Storage, and Pub/Sub.
- Develop and manage ETL/ELT workflows using Apache Spark, SQL, and Python.
- Orchestrate and automate data workflows using Cloud Composer (Apache Airflow).
- Build batch and streaming data processing jobs that integrate data from various structured and unstructured sources.
- Optimize pipeline performance and ensure cost-effective data processing.
- Collaborate with data analysts, scientists, and business teams to understand data requirements and deliver high-quality solutions.
- Implement and monitor data quality checks, validation, and transformation logic.

Required Skills:
- Strong hands-on experience with Google Cloud Platform (GCP)
- Proficiency with Dataproc for big data processing and Apache Spark
- Expertise in Python and SQL for data manipulation and scripting
- Experience with Cloud Composer/Apache Airflow for workflow orchestration
- Knowledge of data modeling, warehousing, and pipeline best practices
- Solid understanding of ETL/ELT architecture and implementation
- Strong troubleshooting and problem-solving skills

Preferred Qualifications:
- GCP Data Engineer or Cloud Architect certification
- Familiarity with BigQuery, Dataflow, and Pub/Sub

Interested candidates can send your resume to pranitathapa@onixnet.com
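The responsibilities above combine Cloud Composer orchestration with Dataproc processing. A hedged sketch of a Composer DAG submitting a PySpark job to Dataproc follows; the project, region, cluster, and GCS paths are assumptions for illustration:

```python
# Hedged sketch: a Cloud Composer (Airflow) DAG submitting a PySpark job to Dataproc.
# Project, region, cluster, and GCS paths are illustrative assumptions.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.dataproc import DataprocSubmitJobOperator

PYSPARK_JOB = {
    "reference": {"project_id": "my-project"},
    "placement": {"cluster_name": "etl-cluster"},
    "pyspark_job": {"main_python_file_uri": "gs://my-bucket/jobs/transform.py"},
}

with DAG(
    dag_id="dataproc_etl_sketch",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    run_transform = DataprocSubmitJobOperator(
        task_id="run_transform",
        job=PYSPARK_JOB,
        region="us-central1",
        project_id="my-project",
    )
```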
Posted 2 weeks ago
0 years
9 - 9 Lacs
Chennai
On-site
Our people work differently depending on their jobs and needs. From hybrid working to flexible hours, we have plenty of options that help our people to thrive. This role is based in India, and as such all normal working days must be carried out in India.
Job description
Join us as a PySpark and Big Data Developer
- This is an opportunity for a driven Software Engineer to take on an exciting new career challenge
- Day-to-day, you'll be engineering and maintaining innovative, customer-centric, high-performance, secure and robust solutions
- It’s a chance to hone your existing technical skills and advance your career while building a wide network of stakeholders
We're offering this role at associate level
What you'll do
In your new role, you’ll be working within a feature team to engineer software, scripts and tools, as well as liaising with other engineers, architects and business analysts across the platform. You’ll also be:
- Producing complex and critical software rapidly and to a high quality, adding value to the business
- Working in permanent teams who are responsible for the full life cycle, from initial development, through enhancement and maintenance to replacement or decommissioning
- Collaborating to optimise our software engineering capability
- Designing, producing, testing and implementing our working software solutions
- Working across the life cycle, from requirements analysis and design, through coding to testing, deployment and operations
The skills you'll need
To take on this role, you’ll need a background in software engineering, software design and architecture, and an understanding of how your area of expertise supports our customers. You'll need at least six years of experience in PySpark, SQL, Snowflake and Big Data. You'll also need experience in JIRA, Confluence and REST API calls. Experience working with AWS in the financial domain is desired.
You’ll also need:
- Experience of working with development and testing tools, bug tracking tools and wikis
- Experience in multiple programming languages or low-code toolsets
- Experience of DevOps and Agile methodology and the associated toolset
- Experience developing unit test cases and executing them
- Experience of implementing programming best practice, especially around scalability, automation, virtualisation, optimisation, availability and performance
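For illustration, a small pytest-style sketch of the unit-testing practice this role calls for, exercising a PySpark transformation on a local SparkSession; the function and column names are invented for the example:

```python
import pytest
from pyspark.sql import SparkSession, functions as F

def add_settlement_flag(df):
    # Transformation under test: flag trades settled within 2 days.
    return df.withColumn("fast_settle", F.col("settle_days") <= 2)

@pytest.fixture(scope="module")
def spark():
    return SparkSession.builder.master("local[1]").appName("tests").getOrCreate()

def test_add_settlement_flag(spark):
    df = spark.createDataFrame([(1, 1), (2, 5)], ["trade_id", "settle_days"])
    result = {r["trade_id"]: r["fast_settle"] for r in add_settlement_flag(df).collect()}
    assert result == {1: True, 2: False}
```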
Posted 2 weeks ago
4.0 - 6.0 years
0 Lacs
Chennai
On-site
Job Description:
About us
At Bank of America, we are guided by a common purpose to help make financial lives better through the power of every connection. Responsible Growth is how we run our company and how we deliver for our clients, teammates, communities, and shareholders every day. One of the keys to driving Responsible Growth is being a great place to work for our teammates around the world. We’re devoted to being a diverse and inclusive workplace for everyone. We hire individuals with a broad range of backgrounds and experiences and invest heavily in our teammates and their families by offering competitive benefits to support their physical, emotional, and financial well-being. Bank of America believes both in the importance of working together and offering flexibility to our employees. We use a multi-faceted approach for flexibility, depending on the various roles in our organization. Working at Bank of America will give you a great career with opportunities to learn, grow and make an impact, along with the power to make a difference. Join us!
Global Business Services
Global Business Services delivers Technology and Operations capabilities to Lines of Business and Staff Support Functions of Bank of America through a centrally managed, globally integrated delivery model and globally resilient operations. Global Business Services is recognized for flawless execution, sound risk management, operational resiliency, operational excellence and innovation. In India, we are present in five locations and operate as BA Continuum India Private Limited (BACI), a non-banking subsidiary of Bank of America Corporation and the operating company for India operations of Global Business Services.
Process Overview
Global Markets Technology & Operations provides end-to-end technology solutions for the Markets business, including Equity, Prime Brokerage, Interest Rates, Currencies, Commodities, Derivatives and Structured Products. Across all these products, solutions include architecture, design, development, change management, implementation and support using various enterprise technologies. In addition, GMT&O provides Sales, Electronic Trading, Trade Workflow, Pricing, Market Risk, Middle Office, Collateral Management, Credit Risk, Post-trade Confirmation, Settlement and Client Service processes for Trading, Capital Markets, and Wealth Management businesses.
ERTF – CFO is responsible for the technology solutions and platforms that support the Chief Financial Officer (CFO) Group, including Global Financial Control, Corporate Treasury, Financial Forecasting, Enterprise Cost Management, Investor Relations, and Line of Business Finance functions (BFO). Increased demand for integrated and streamlined Business Finance management solutions has resulted in a few initiatives, spanning Subledger Simplification. The AML Detection Channel Platform (ADCP) application is an AML monitoring tool used by GFCC to identify suspicious activity, such as money laundering and fraud, which requires compliance review. The AML Alert Reconciliation Process (ARL) is another application used by GFCC which receives alerts from various detection channels (AML, fraud, etc.), removes alert noise, enriches alerts, obtains attributes, decides (rule-based) whether the alert meets criteria, and transforms the alert into an event.
Job Description:
This job is responsible for developing and delivering complex requirements to accomplish business goals. Key responsibilities of the job include ensuring that software is developed to meet functional, non-functional and compliance requirements, and that solutions are well designed with maintainability/ease of integration and testing built in from the outset. Job expectations include a strong knowledge of development and testing practices common to the industry, and of design and architectural patterns.
Responsibilities:
- Codes solutions and unit tests to deliver a requirement/story per the defined acceptance criteria and compliance requirements
- Designs, develops, and modifies architecture components, application interfaces, and solution enablers while ensuring principal architecture integrity is maintained
- Mentors other software engineers and coaches the team on Continuous Integration and Continuous Delivery (CI-CD) practices and automating the tool stack
- Executes story refinement, definition of requirements, and estimating work necessary to realize a story through the delivery lifecycle
- Performs spike/proof of concept as necessary to mitigate risk or implement new ideas
- Automates manual release activities
- Designs, develops, and maintains automated test suites (integration, regression, performance)
- Manager of Process & Data: Demonstrates and expects process knowledge, data-driven decisions, simplicity and continuous improvement.
- Enterprise Advocate & Communicator: Delivers clear and concise messages that motivate, convey the “why” and connect contributions to business results.
- Risk Manager: Leads and encourages the identification, escalation and resolution of potential risks.
- People Manager & Coach: Knows and develops team members through coaching and feedback.
- Financial Steward: Manages expenses and demonstrates an owner’s mindset.
- Enterprise Talent Leader: Recruits, on-boards and develops talent, and supports talent mobility for career growth.
- Driver of Business Outcomes: Delivers results through effective team management, structure, and routines.
Requirements
Education: BE/BTech/ME/MTech
Certifications, if any: NA
Experience range: 4-6 years
Foundational Skills:
- Ability to work in multi-technology projects
- PySpark, Teradata, SQL, Unix shell scripting
- Excellent oral/written communication skills and project management skills
Desired Skills:
- Organized and able to multi-task in a fast-paced environment
- Highly motivated, able to work independently; a self-starter with strong problem-solving/analytical skills
- Excellent interpersonal skills; positive attitude; team player; flexible
- Willingness to learn and adapt to changes
Shift Timings: 11:00 am to 8:00 pm
Location: Chennai
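As a hedged illustration of the PySpark-plus-Teradata stack listed above (not the bank's actual process), the sketch below reads a table over JDBC and applies a simple noise-reduction filter in the spirit of the ARL description; the host, credentials, table, and threshold are placeholders, and the Teradata JDBC driver jar must be on the Spark classpath:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("aml_alert_load").getOrCreate()

# Pull alert records from Teradata over JDBC (host/database are assumptions).
alerts = (spark.read.format("jdbc")
          .option("url", "jdbc:teradata://td.example.com/DATABASE=aml")
          .option("driver", "com.teradata.jdbc.TeraDriver")
          .option("dbtable", "alert_events")
          .option("user", "svc_user")      # supply via a secret store in practice
          .option("password", "***")
          .load())

# Simple noise-reduction step: keep only alerts above a score threshold
# before downstream enrichment and event transformation.
filtered = alerts.filter(alerts.alert_score >= 75)
filtered.write.mode("overwrite").parquet("/data/curated/alerts/")
```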
Posted 2 weeks ago
5.0 years
0 Lacs
Bengaluru East, Karnataka, India
On-site
Primary Skills: PySpark, Spark, and proficiency in SQL
Secondary Skills: Scala and Python
Experience: 3+ Yrs
- Bachelor’s degree or foreign equivalent required from an accredited institution. Will also consider three years of progressive experience in the specialty in lieu of every year of education.
- At least 5 years of experience in PySpark and Spark with Hadoop distributed frameworks, handling large volumes of data using Spark and the Hadoop ecosystem for data pipeline creation, deployment, maintenance, and debugging.
- Experience in scheduling and monitoring jobs and creating tools for automation.
- At least 4 years of experience with Scala and Python required.
- Proficient knowledge of SQL with any RDBMS.
- Strong communication skills (verbal and written), with the ability to communicate across teams, internal and external, at all levels.
- Ability to work within deadlines and effectively prioritize and execute tasks.
Preferred Qualifications:
- At least 1 year of AWS development experience is preferred.
- Experience driving automation; DevOps knowledge is an added advantage.
- Advanced conceptual understanding of at least one programming language.
- Advanced conceptual understanding of one database and one operating system.
- Understanding of software engineering, with practice in at least one project.
- Ability to contribute to medium-to-complex tasks independently.
- Exposure to design principles and the ability to understand design specifications independently.
- Ability to run test cases and scenarios as per the plan.
- Ability to accept and respond to production issues and coordinate with stakeholders.
- Good understanding of the SDLC.
- Analytical abilities, logical thinking, and awareness of the latest technologies and trends.
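For illustration of the job-scheduling and monitoring tooling mentioned above, a minimal Python wrapper that launches a Spark job via spark-submit, logs the outcome, and retries once on failure; the script path and retry policy are invented for the example:

```python
import logging
import subprocess
import sys

logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")

def run_spark_job(script: str, retries: int = 1) -> int:
    """Launch a Spark job and retry on failure, logging each attempt."""
    cmd = ["spark-submit", "--master", "yarn", script]
    for attempt in range(retries + 1):
        logging.info("Launching %s (attempt %d)", script, attempt + 1)
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode == 0:
            logging.info("Job succeeded")
            return 0
        # Log the tail of stderr so failures are visible to the scheduler.
        logging.error("Job failed: %s", result.stderr[-500:])
    return result.returncode

if __name__ == "__main__":
    sys.exit(run_spark_job("jobs/daily_aggregation.py"))
```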
Posted 2 weeks ago
5.0 - 10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Overview
DataOps L3: The role will leverage and enhance existing technologies in the area of data and analytics solutions like Power BI, Azure data engineering technologies, ADLS, ADB, Synapse, and other Azure services. The role will be responsible for developing and supporting IT products and solutions using these technologies and deploying them for business users.
Responsibilities:
- 5 to 10 years of IT and Azure data engineering technologies experience.
- Prior experience in ETL, data pipelines, and data flow techniques using Azure Data Services.
- Working experience in Python, PySpark, Azure Data Factory, Azure Data Lake Gen2, Databricks, Azure Synapse, and file formats like JSON and Parquet.
- Experience in creating ADF pipelines to source and process data sets.
- Experience in creating Databricks notebooks to cleanse, transform and enrich data sets.
- Development experience in orchestration of pipelines.
- Good understanding of SQL, databases, and data warehouse systems, preferably Teradata.
- Experience in deployment and monitoring techniques.
- Working experience with Azure DevOps CI/CD pipelines to deploy Azure resources.
- Experience in handling operations/integration with a source repository.
- Must have good knowledge of data warehouse concepts and data warehouse modelling.
- Working knowledge of SNOW, including resolving incidents, handling change requests/service requests, and reporting on metrics to provide insights.
- Collaborate with the project team to understand tasks, model tables using data warehouse best practices, and develop data pipelines to ensure the efficient delivery of data.
- Strong expertise in performance tuning and optimization of data processing systems.
- Proficient in Azure Data Factory, Azure Databricks, Azure SQL Database, and other Azure data services.
- Develop and enforce best practices for data management, including data governance and security.
- Work closely with cross-functional teams to understand data requirements and deliver solutions that meet business needs.
- Proficient in implementing a DataOps framework.
Qualifications:
- Azure Data Factory
- Azure Databricks
- Azure Synapse
- PySpark/SQL
- ADLS
- Azure DevOps with CI/CD implementation
Nice-to-Have Skill Sets:
- Business Intelligence tools (preferred: Power BI)
- DP-203 certified
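As a hedged sketch of the Databricks cleanse/transform/enrich step described above: flattening nested JSON and writing Parquet to ADLS Gen2. The container, storage account, and field names are assumed, and `spark` is the session that Databricks notebooks predefine:

```python
from pyspark.sql import functions as F

# Read raw JSON events from the data lake (path is an assumed example).
raw = spark.read.json("abfss://raw@examplelake.dfs.core.windows.net/events/")

# Flatten one level of nesting and enrich with a load timestamp.
events = (raw.select("event_id",
                     F.col("payload.customer_id").alias("customer_id"),
                     F.explode("payload.items").alias("item"))
          .withColumn("load_ts", F.current_timestamp()))

# Write curated Parquet for downstream Synapse/Power BI consumption.
(events.write.mode("append")
 .parquet("abfss://curated@examplelake.dfs.core.windows.net/events/"))
```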
Posted 2 weeks ago
3.0 - 8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description:
About Us
At Bank of America, we are guided by a common purpose to help make financial lives better through the power of every connection. Responsible Growth is how we run our company and how we deliver for our clients, teammates, communities and shareholders every day. One of the keys to driving Responsible Growth is being a great place to work for our teammates around the world. We’re devoted to being a diverse and inclusive workplace for everyone. We hire individuals with a broad range of backgrounds and experiences and invest heavily in our teammates and their families by offering competitive benefits to support their physical, emotional, and financial well-being. Bank of America believes both in the importance of working together and offering flexibility to our employees. We use a multi-faceted approach for flexibility, depending on the various roles in our organization. Working at Bank of America will give you a great career with opportunities to learn, grow and make an impact, along with the power to make a difference. Join us!
Global Business Services
Global Business Services delivers Technology and Operations capabilities to Lines of Business and Staff Support Functions of Bank of America through a centrally managed, globally integrated delivery model and globally resilient operations. Global Business Services is recognized for flawless execution, sound risk management, operational resiliency, operational excellence and innovation. In India, we are present in five locations and operate as BA Continuum India Private Limited (BACI), a non-banking subsidiary of Bank of America Corporation and the operating company for India operations of Global Business Services.
Process Overview
ARQ supports the global businesses of the Bank with solutions requiring judgment application, sound business understanding and an analytical perspective. Domain experience in the areas of Financial Research & Analysis, Quantitative Modeling, Risk Management and Prospecting Support provides solutions for revenue enhancement, risk mitigation and cost optimization. The division, comprising highly qualified associates, operates from four locations, i.e. Mumbai, GIFT City, Gurugram and Hyderabad.
Job Description
The individual should be capable of running technical processes relating to the execution of models across an enterprise portfolio. This will involve familiarity with technical infrastructure (specifically GCP and Quartz), coding languages, and the model development and software development lifecycle. In addition, there is opportunity for the right candidates to support the implementation of new processes into target-state, as well as explore ways to make the processes more efficient and robust. Specifically: manage model execution, results analysis and reporting related to AMGQS models. The Analyst will also work with the implementation team to ensure that this critical function is well controlled.
Responsibilities:
- Write Python and/or PySpark code to automate production processes of several risk and loss measurement statistical models. Examples of model execution production processes are error attribution, scenario shock, sensitivity, result publication and reporting.
- Leverage skills in quantitative methods to conduct ongoing monitoring of model performance. Also possess capabilities in data science and data visualization techniques and tools.
- Identify, analyze, monitor, and present risk factors and metrics to, and integrate with, business partners.
- Proactively solve challenges with process design deficiencies, implementation and remediation efforts.
- Perform operational controls, ensuring consistency and compliance across all functions, including procedures, critical-use spreadsheets and tool inventory.
- Assist with enhancing the overall governance environment within the Operations space.
- Partner with the IT team to perform system-related design assessments, control effectiveness testing, process testing, issue resolution monitoring, and supporting the sign-off by management of processes and controls in scope.
- Work with model implementation experts and technology teams to design and integrate Python workflows into the existing in-house target-state platform for process execution.
Requirements:
Experience: 3 to 8 years
Education: Graduate/Post Graduate from Tier 1 institutes; Bachelor's or master's degree in mathematics, engineering, physics, statistics, or financial mathematics/engineering
Foundational skills:
- Good understanding of numerical analysis, probability theory, linear algebra, and stochastic analysis
- Proficiency in Python (NumPy, pandas, OOP, unittest) and LaTeX; prior experience with git, Bitbucket, and agile workflows is a plus
- Understanding of credit risk modelling and processes
- Integrates seamlessly across a complex set of stakeholders, internal partners, and external resources
- Strong problem-solving skills and attention to detail
- Excellent communication and collaboration abilities
- Ability to thrive in a fast-paced, dynamic environment and adapt to evolving priorities and requirements
Desired Skills:
- Excellent communication and collaboration abilities
Work Location: Hyderabad & Mumbai
Work Timings: 11am to 8pm IST
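For illustration only (a toy model, not the bank's actual process), a scenario-shock sensitivity check in the spirit of the responsibilities above, using pandas, NumPy, and unittest:

```python
import unittest

import numpy as np
import pandas as pd

def expected_loss(df: pd.DataFrame, pd_shock: float = 0.0) -> float:
    # Toy loss model: EL = sum(PD * LGD * EAD), with an additive shock
    # applied to PD and clipped to remain a valid probability.
    shocked_pd = np.clip(df["pd"] + pd_shock, 0.0, 1.0)
    return float((shocked_pd * df["lgd"] * df["ead"]).sum())

class TestScenarioShock(unittest.TestCase):
    def test_shock_increases_loss(self):
        portfolio = pd.DataFrame({"pd": [0.02, 0.05],
                                  "lgd": [0.4, 0.6],
                                  "ead": [1_000.0, 2_000.0]})
        base = expected_loss(portfolio)
        shocked = expected_loss(portfolio, pd_shock=0.01)
        # A positive PD shock must increase expected loss.
        self.assertGreater(shocked, base)

if __name__ == "__main__":
    unittest.main()
```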
Posted 2 weeks ago
6.0 years
0 Lacs
Calcutta
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Microsoft
Management Level: Senior Associate
Job Description & Summary
At PwC, our people in software and product innovation focus on developing cutting-edge software solutions and driving product innovation to meet the evolving needs of clients. These individuals combine technical experience with creative thinking to deliver innovative software products and solutions. Those in software engineering at PwC will focus on developing innovative software solutions to drive digital transformation and enhance business performance. In this field, you will use your knowledge to design, code, and test cutting-edge applications that revolutionise industries and deliver exceptional user experiences.
Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.
At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.
Responsibilities:
We are seeking a highly skilled and experienced Python developer with 6-7 years of hands-on experience in software development.
Key Responsibilities:
- Design, develop, test and maintain robust and scalable backend applications, using FastAPI to deliver high-performance APIs.
- Write reusable, efficient code following best practices.
- Collaborate with cross-functional teams and integrate user-facing elements with server-side logic.
- Architect and implement distributed, scalable microservices, leveraging Temporal workflows for orchestrating complex processes.
- Participate in code reviews and mentor junior developers.
- Debug and resolve technical issues and production incidents.
- Follow agile methodologies and contribute to sprint planning and estimations.
- Strong communication and collaboration skills.
- Relevant certifications are a plus.
Required Skills:
- Strong proficiency in Python 3.x.
- Collaborate closely with DevOps to implement CI/CD pipelines for Python projects, ensuring smooth deployment to production environments.
- Integrate with various databases (e.g., Cosmos DB) and message queues (e.g., Kafka, Event Hubs) for seamless backend operations.
- Experience in one or more Python frameworks (Django, Flask, FastAPI).
- Develop and maintain unit and integration tests using frameworks like pytest and unittest to ensure code quality and reliability.
- Experience with Docker, Kubernetes, and cloud environments (AWS, GCP, or Azure) for deploying and managing Python services.
- Familiarity with asynchronous programming (e.g., asyncio, aiohttp) and event-driven architectures.
- Strong skill in PySpark for large-scale data processing.
- Solid understanding of object-oriented programming and design principles.
- Proficient in using version control systems like Git.
Mandatory skill sets: Python Developer
Preferred skill sets: Experience with Docker, Kubernetes, and cloud environments (AWS, GCP, or Azure) for deploying and managing Python services
Years of experience required: 4-7 Years
Education qualification: B.Tech/B.E./MCA
Education (if blank, degree and/or field of study not specified)
Degrees/Field of Study required: Bachelor of Technology, Bachelor of Engineering
Degrees/Field of Study preferred:
Certifications (if blank, certifications not specified)
Required Skills: Python (Programming Language)
Optional Skills: Acceptance Test Driven Development (ATDD), Accepting Feedback, Active Listening, Analytical Thinking, Android, API Management, Appian (Platform), Application Development, Application Frameworks, Application Lifecycle Management, Application Software, Business Process Improvement, Business Process Management (BPM), Business Requirements Analysis, C#.NET, C++ Programming Language, Client Management, Code Review, Coding Standards, Communication, Computer Engineering, Computer Science, Continuous Integration/Continuous Delivery (CI/CD), Creativity {+ 46 more}
Desired Languages (if blank, desired languages not specified)
Travel Requirements:
Available for Work Visa Sponsorship?
Government Clearance Required?
Job Posting End Date:
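As a minimal, hedged sketch of the FastAPI backend pattern this role centres on: an async endpoint with a Pydantic model. The names are illustrative, and the Kafka/Cosmos DB integration is only indicated in a comment:

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Order(BaseModel):
    order_id: int
    amount: float

@app.post("/orders")
async def create_order(order: Order) -> dict:
    # In a real service this handler would enqueue to Kafka/Event Hubs
    # or persist to Cosmos DB; here it simply acknowledges the request.
    return {"status": "accepted", "order_id": order.order_id}

# Run locally with: uvicorn main:app --reload  (assuming this file is main.py)
```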
Posted 2 weeks ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Experience: 5 to 7 years
Location: Bengaluru, Gurgaon, Pune
About Us: AceNet Consulting is a fast-growing global business and technology consulting firm specializing in business strategy, digital transformation, technology consulting, product development, start-up advisory and fund-raising services to our global clients across banking & financial services, healthcare, supply chain & logistics, consumer retail, manufacturing, eGovernance and other industry sectors. We are looking for hungry, highly skilled and motivated individuals to join our dynamic team. If you’re passionate about technology and thrive in a fast-paced environment, we want to hear from you.
Job Summary: We are seeking an experienced and motivated Data Engineer with a strong background in Python, PySpark, and SQL to join our growing data engineering team. The ideal candidate will have hands-on experience with cloud data platforms and data modelling, and a proven track record of building and optimising large-scale data pipelines in agile environments.
Key Responsibilities:
- Design, develop, and maintain robust data pipelines using Python, PySpark, and SQL.
- Apply a strong understanding of data modelling.
- Use code management tools such as Git and GitHub proficiently.
- Apply query performance tuning and optimisation techniques.
Role Requirements and Qualifications:
- 5+ years' experience as a data engineer in complex data ecosystems.
- Extensive experience working in an agile environment.
- Experience with cloud data platforms like AWS Redshift and Databricks.
- Excellent problem-solving and communication skills.
Why Join Us:
- Opportunities to work on transformative projects, cutting-edge technology and innovative solutions with leading global firms across industry sectors.
- Continuous investment in employee growth and professional development, with a strong focus on up- and re-skilling.
- Competitive compensation & benefits, ESOPs and international assignments.
- Supportive environment with a healthy work-life balance and a focus on employee well-being.
- Open culture that values diverse perspectives, encourages transparent communication and rewards contributions.
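For illustration of the query performance tuning highlighted above, a short PySpark sketch that hints a broadcast join so the small dimension table is shipped to executors instead of shuffling the large fact table; the paths and columns are assumed:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("tuning_example").getOrCreate()

sales = spark.read.parquet("/data/sales")        # large fact table
regions = spark.read.parquet("/data/regions")    # small dimension table

# Broadcasting the small side replaces a shuffle join with a map-side join,
# a common first step when tuning join-heavy queries.
joined = sales.join(broadcast(regions), on="region_id", how="left")
joined.explain()  # inspect the physical plan for BroadcastHashJoin
```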
Posted 2 weeks ago
8.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Role Description
Role Proficiency: This role requires proficiency in developing data pipelines, including coding and testing for ingesting, wrangling, transforming and joining data from various sources. The ideal candidate should be adept in ETL tools like Informatica, Glue, Databricks and DataProc, with strong coding skills in Python, PySpark and SQL. This position demands independence and proficiency across various data domains. Expertise in data warehousing solutions such as Snowflake, BigQuery, Lakehouse and Delta Lake is essential, including the ability to calculate processing costs and address performance issues. A solid understanding of DevOps and infrastructure needs is also required.
Outcomes:
- Act creatively to develop pipelines/applications by selecting appropriate technical options, optimizing application development, maintenance and performance through design patterns and reusing proven solutions.
- Support the Project Manager in day-to-day project execution and account for the developmental activities of others.
- Interpret requirements, create optimal architecture and design solutions in accordance with specifications.
- Document and communicate milestones/stages for end-to-end delivery.
- Code using best standards; debug and test solutions to ensure best-in-class quality.
- Tune performance of code and align it with the appropriate infrastructure, understanding cost implications of licenses and infrastructure.
- Create data schemas and models effectively.
- Develop and manage data storage solutions, including relational databases, NoSQL databases, Delta Lakes and data lakes.
- Validate results with user representatives, integrating the overall solution.
- Influence and enhance customer satisfaction and employee engagement within project teams.
Measures of Outcomes:
- Adherence to engineering processes and standards
- Adherence to schedule/timelines
- Adherence to SLAs where applicable
- Number of defects post delivery
- Number of non-compliance issues
- Reduction of recurrence of known defects
- Quick turnaround of production bugs
- Completion of applicable technical/domain certifications
- Completion of all mandatory training requirements
- Efficiency improvements in data pipelines (e.g. reduced resource consumption, faster run times)
- Average time to detect, respond to and resolve pipeline failures or data issues
- Number of data security incidents or compliance breaches
Outputs Expected:
- Code: Develop data processing code with guidance, ensuring performance and scalability requirements are met. Define coding standards, templates and checklists. Review code for team and peers.
- Documentation: Create/review templates, checklists, guidelines and standards for design/process/development. Create/review deliverable documents, including design documents, architecture documents, infra costing, business requirements, source-target mappings, test cases and results.
- Configure: Define and govern the configuration management plan. Ensure compliance from the team.
- Test: Review/create unit test cases, scenarios and execution. Review test plans and strategies created by the testing team. Provide clarifications to the testing team.
- Domain Relevance: Advise data engineers on the design and development of features and components, leveraging a deeper understanding of business needs. Learn more about the customer domain and identify opportunities to add value. Complete relevant domain certifications.
- Manage Project: Support the Project Manager with project inputs. Provide inputs on project plans or sprints as needed. Manage the delivery of modules.
- Manage Defects: Perform defect root cause analysis (RCA) and mitigation. Identify defect trends and implement proactive measures to improve quality.
- Estimate: Create and provide input for effort and size estimation and plan resources for projects.
- Manage Knowledge: Consume and contribute to project-related documents, SharePoint libraries and client universities. Review reusable documents created by the team.
- Release: Execute and monitor the release process.
- Design: Contribute to the creation of design (HLD, LLD, SAD)/architecture for applications, business components and data models.
- Interface with Customer: Clarify requirements and provide guidance to the development team. Present design options to customers. Conduct product demos. Collaborate closely with customer architects to finalize designs.
- Manage Team: Set FAST goals and provide feedback. Understand team members' aspirations and provide guidance and opportunities. Ensure team members are upskilled. Engage the team in projects. Proactively identify attrition risks and collaborate with BSE on retention measures.
- Certifications: Obtain relevant domain and technology certifications.
Skill Examples:
- Proficiency in SQL, Python or other programming languages used for data manipulation.
- Experience with ETL tools such as Apache Airflow, Talend, Informatica, AWS Glue, Dataproc and Azure ADF.
- Hands-on experience with cloud platforms like AWS, Azure or Google Cloud, particularly with data-related services (e.g. AWS Glue, BigQuery).
- Conduct tests on data pipelines and evaluate results against data quality and performance specifications.
- Experience in performance tuning.
- Experience in data warehouse design and cost improvements.
- Apply and optimize data models for efficient storage, retrieval and processing of large datasets.
- Communicate and explain design/development aspects to customers.
- Estimate time and resource requirements for developing/debugging features/components.
- Participate in RFP responses and solutioning.
- Mentor team members and guide them in relevant upskilling and certification.
Knowledge Examples:
- Knowledge of various ETL services used by cloud providers, including Apache PySpark, AWS Glue, GCP DataProc/Dataflow, Azure ADF and ADLS.
- Proficiency in SQL for analytics and windowing functions.
- Understanding of data schemas and models.
- Familiarity with domain-related data.
- Knowledge of data warehouse optimization techniques.
- Understanding of data security concepts.
- Awareness of patterns, frameworks and automation practices.
Additional Comments:
We are seeking a highly experienced Senior Data Engineer to design, develop, and optimize scalable data pipelines in a cloud-based environment. The ideal candidate will have deep expertise in PySpark, SQL and Azure Databricks, and experience with either AWS or GCP. A strong foundation in data warehousing, ELT/ETL processes, and dimensional modeling (Kimball/star schema) is essential for this role.
Must-Have Skills:
- 8+ years of hands-on experience in data engineering or big data development.
- Strong proficiency in PySpark and SQL for data transformation and pipeline development.
- Experience working in Azure Databricks or equivalent Spark-based cloud platforms.
- Practical knowledge of cloud data environments: Azure, AWS, or GCP.
- Solid understanding of data warehousing concepts, including Kimball methodology and star/snowflake schema design.
- Proven experience designing and maintaining ETL/ELT pipelines in production.
- Familiarity with version control (e.g. Git), CI/CD practices, and data pipeline orchestration tools (e.g. Airflow, Azure Data Factory).
Skills: Azure Data Factory, Azure Databricks, PySpark, SQL
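As a hedged sketch of the Kimball-style star-schema work described above, the query below joins a fact table to a dimension and uses a window function to rank products within each month; the view names and columns are illustrative:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("star_schema_demo").getOrCreate()

# Register fact and dimension tables (warehouse paths are assumptions).
spark.read.parquet("/warehouse/fact_sales").createOrReplaceTempView("fact_sales")
spark.read.parquet("/warehouse/dim_product").createOrReplaceTempView("dim_product")

# Rank products by monthly revenue using a window over the aggregated fact.
top_products = spark.sql("""
    SELECT product_name, sale_month, monthly_revenue
    FROM (
        SELECT d.product_name,
               f.sale_month,
               SUM(f.amount) AS monthly_revenue,
               RANK() OVER (PARTITION BY f.sale_month
                            ORDER BY SUM(f.amount) DESC) AS rnk
        FROM fact_sales f
        JOIN dim_product d ON f.product_key = d.product_key
        GROUP BY d.product_name, f.sale_month
    ) t
    WHERE t.rnk <= 5
""")
top_products.show()
```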
Posted 2 weeks ago