0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description

YOUR IMPACT
Are you passionate about developing mission-critical, high-quality software solutions, using cutting-edge technology, in a dynamic environment?

OUR IMPACT
We are Compliance Engineering, a global team of more than 300 engineers and scientists who work on the most complex, mission-critical problems. We build and operate a suite of platforms and applications that prevent, detect, and mitigate regulatory and reputational risk across the firm. We have access to the latest technology and to massive amounts of structured and unstructured data, and we leverage modern frameworks to build responsive and intuitive front-end and Big Data applications. The firm is making a significant investment to uplift and rebuild the Compliance application portfolio. To achieve this, Compliance Engineering is looking to fill several Systems Engineer roles.

How You Will Fulfill Your Potential
As a member of our team, you will:
Partner globally with users, development teams, and engineering colleagues across multiple divisions to facilitate onboarding of new business initiatives and to test and validate Compliance Surveillance coverage.
Learn from experts, and train and mentor team members.
Leverage various technologies, including Java, Python, PySpark, and other Big Data technologies, in delivering solutions.
Innovate and incubate new ideas.
Be involved in the full life cycle: prioritization, defining, designing, implementing, testing, deploying, and maintaining software across our products.

Qualifications
A successful candidate will possess the following attributes:
A Bachelor's or Master's degree in Computer Science, Computer Engineering, or a similar field of study.
Expertise in Java development, debugging, and problem solving.
Experience in delivery or project management.
The ability (and tenacity) to clearly express ideas and arguments in meetings and on paper.
Experience in some of the following is desired and can set you apart from other candidates:
Relational databases
Hadoop and big data technologies
Knowledge of the financial industry (specifically the Capital Markets domain) and compliance or risk functions

About Goldman Sachs
At Goldman Sachs, we commit our people, capital and ideas to help our clients, shareholders and the communities we serve to grow. Founded in 1869, we are a leading global investment banking, securities and investment management firm. Headquartered in New York, we maintain offices around the world. We believe who you are makes you better at what you do. We're committed to fostering and advancing diversity and inclusion in our own workplace and beyond by ensuring every individual within our firm has a number of opportunities to grow professionally and personally, from our training and development opportunities and firmwide networks to benefits, wellness and personal finance offerings and mindfulness programs. Learn more about our culture, benefits, and people at GS.com/careers. We're committed to finding reasonable accommodations for candidates with special needs or disabilities during our recruiting process. Learn more: https://www.goldmansachs.com/careers/footer/disability-statement.html
© The Goldman Sachs Group, Inc., 2023. All rights reserved. Goldman Sachs is an equal employment/affirmative action employer.
Posted 1 week ago
3.0 - 6.0 years
12 - 22 Lacs
Noida
Work from Office
About CloudKeeper
CloudKeeper is a cloud cost optimization partner that combines the power of group buying and commitments management, expert cloud consulting and support, and an enhanced visibility and analytics platform to reduce cloud cost and help businesses maximize the value from AWS, Microsoft Azure, and Google Cloud. A certified AWS Premier Partner, Azure Technology Consulting Partner, Google Cloud Partner, and FinOps Foundation Premier Member, CloudKeeper has helped 400+ global companies save an average of 20% on their cloud bills, modernize their cloud set-up, and maximize value, all while maintaining flexibility and avoiding any long-term commitments or costs. CloudKeeper was hived off from TO THE NEW, a digital technology services company with 2500+ employees and an 8-time GPTW winner.

Position Overview
We are looking for an experienced and driven Data Engineer to join our team. The ideal candidate will have a strong foundation in big data technologies, particularly Spark, and a basic understanding of Scala to design and implement efficient data pipelines. As a Data Engineer at CloudKeeper, you will be responsible for building and maintaining robust data infrastructure, integrating large datasets, and ensuring seamless data flow for analytical and operational purposes.

Key Responsibilities
Design, develop, and maintain scalable data pipelines and ETL processes to collect, process, and store data from various sources.
Work with Apache Spark to process large datasets in a distributed environment, ensuring optimal performance and scalability.
Develop and optimize Spark jobs and data transformations using Scala for large-scale data processing.
Collaborate with data analysts and other stakeholders to ensure data pipelines meet business and technical requirements.
Integrate data from different sources (databases, APIs, cloud storage, etc.) into a unified data platform.
Ensure data quality, consistency, and accuracy by building robust data validation and cleansing mechanisms.
Use cloud platforms (AWS, Azure, or GCP) to deploy and manage data processing and storage solutions.
Automate data workflows and tasks using appropriate tools and frameworks.
Monitor and troubleshoot data pipeline performance, optimizing for efficiency and cost-effectiveness.
Implement data security best practices, ensuring data privacy and compliance with industry standards.

Required Qualifications
4-6 years of experience as a Data Engineer or in an equivalent role.
Strong experience working with Apache Spark with Scala for distributed data processing and big data handling.
Basic knowledge of Python and its application in Spark for writing efficient data transformations and processing jobs.
Proficiency in SQL for querying and manipulating large datasets.
Experience with cloud data platforms, preferably AWS (e.g., S3, EC2, EMR, Redshift) or other cloud-based solutions.
Strong knowledge of data modeling, ETL processes, and data pipeline orchestration.
Familiarity with containerization (Docker) and cloud-native tools for deploying data solutions.
Knowledge of data warehousing concepts and experience with tools like AWS Redshift, Google BigQuery, or Snowflake is a plus.
Experience with version control systems such as Git.
Strong problem-solving abilities and a proactive approach to resolving technical challenges.
Excellent communication skills and the ability to work collaboratively within cross-functional teams.
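The pipeline responsibilities above are typical Spark ETL work; a minimal illustrative sketch of one such step (the bucket paths, column names, and validation rules are hypothetical, not taken from the posting):

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Ingest raw data from cloud storage (path and schema are hypothetical).
raw = spark.read.option("header", True).csv("s3://raw-bucket/orders/")

# Basic cleansing and validation: drop duplicates, reject rows missing keys, fix types.
clean = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("order_id").isNotNull() & F.col("amount").isNotNull())
       .withColumn("amount", F.col("amount").cast("double"))
       .withColumn("order_date", F.to_date("order_date", "yyyy-MM-dd"))
)

# Write curated output partitioned by date for downstream analytics.
clean.write.mode("overwrite").partitionBy("order_date").parquet("s3://curated-bucket/orders/")
```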
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
About This Opportunity
Ericsson is a leading provider of telecommunications equipment and services to mobile and fixed network operators globally. We are seeking a highly skilled and experienced Data Scientist to join our dynamic team at Ericsson. As a Data Scientist, you will be responsible for leveraging advanced analytics and machine learning techniques to drive actionable insights and solutions for our telecom domain. This role requires a deep understanding of data science methodologies, strong programming skills, and proficiency in cloud-based environments.

What You Will Do
Develop and deploy machine learning models for various applications, including chatbots, NLP, computer vision, and generative AI, using techniques such as XGBoost and random forests.
Utilize Python for data manipulation, analysis, and modeling tasks.
Use SQL proficiently for querying and analyzing large datasets.
Use Docker and Kubernetes for containerization and orchestration of applications.
Apply basic knowledge of PySpark for distributed computing and data processing.
Collaborate with cross-functional teams to understand business requirements and translate them into analytical solutions.
Deploy machine learning models into production environments and ensure scalability and reliability.
Preferably have experience working with Google Cloud Platform (GCP) services for data storage, processing, and deployment.
Experience in analysing complex problems and translating them into algorithms.
Backend development of REST APIs using Flask or FastAPI.
Deployment experience with CI/CD pipelines.
Working knowledge of handling data sets and data pre-processing through PySpark.
Writing queries targeting Cassandra and PostgreSQL databases.
Applying design principles in application development.
Experience of Service-Oriented Architecture (SOA, web services, REST).
Experience of agile development and GCP BigQuery.
Experience with general tools and techniques, e.g. Docker, K8s, Git, Argo Workflows.

The Skills You Bring
Bachelor's degree in Computer Science, Statistics, Mathematics, or a related field; a Master's degree or PhD is preferred.
3-7 years of experience in data science and machine learning roles, preferably within the telecommunications or related industry.
Proven experience in model development, evaluation, and deployment.
Strong programming skills in Python and SQL.
Familiarity with Docker, Kubernetes, and PySpark.
Solid understanding of machine learning techniques and algorithms.
Experience working with cloud platforms, preferably GCP.
Excellent problem-solving skills and the ability to work independently as well as part of a team.
Strong communication and presentation skills, with the ability to explain complex analytical concepts to non-technical stakeholders.

Why join Ericsson?
At Ericsson, you'll have an outstanding opportunity. The chance to use your skills and imagination to push the boundaries of what's possible. To build solutions never seen before to some of the world's toughest problems. You'll be challenged, but you won't be alone. You'll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next.

What happens once you apply?
Click Here to find all you need to know about what our typical hiring process looks like.
Encouraging a diverse and inclusive organization is core to our values at Ericsson; that's why we champion it in everything we do. We truly believe that by collaborating with people with different experiences we drive innovation, which is essential for our future growth.
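The modelling work described above commonly starts with a tree-based baseline such as XGBoost; a minimal illustrative sketch of training and evaluating one (the dataset, file name, and feature/label columns are hypothetical):

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier

# Hypothetical telecom dataset: usage features plus a binary churn label.
df = pd.read_csv("network_usage.csv")
X, y = df.drop(columns=["churn"]), df["churn"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Train a gradient-boosted tree model and evaluate it before any deployment step.
model = XGBClassifier(n_estimators=200, max_depth=5, learning_rate=0.1)
model.fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```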
We encourage people from all backgrounds to apply and realize their full potential as part of our Ericsson team. Ericsson is proud to be an Equal Opportunity Employer.
Primary country and city: India (IN) || Noida
Req ID: 759817
Posted 1 week ago
4.0 - 7.0 years
15 - 25 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
Role & Responsibilities
Looking for AWS Data Engineers (immediate joiners) for Hyderabad, Chennai, Noida, Pune, and Bangalore locations.
Mandatory skills: Python, PySpark, SQL, AWS Glue.
Strong technical skills in services like S3, Athena, Lambda, Glue (PySpark), SQL, data warehousing, Informatica, and Oracle.
Design, develop, and implement custom solutions within the Collibra platform to support data governance initiatives.
Develop ETL processes for data ingestion, transformation, and loading into data lakes and warehouses.
Collaborate with data scientists and analysts to ensure data availability for analytics and reporting.

Preferred Candidate Profile
Snowflake, Agile methodology, and Tableau.
Proficiency in Python/Scala, Spark architecture, complex SQL, and RDBMS.
Hands-on experience with ETL tools (e.g., Informatica) and SCD1, SCD2.
2-6 years of DWH, AWS services, and ETL design knowledge.
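SCD2 handling, called out in the profile above, is usually implemented as an expire-and-append pattern; a minimal illustrative PySpark sketch (the table paths, the customer_id key, and the tracked address attribute are hypothetical, and handling of brand-new customers is omitted for brevity):

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("scd2_customer_dim").getOrCreate()

# Hypothetical inputs: the current dimension and today's source extract.
dim = spark.read.parquet("s3://dwh/customer_dim/")          # customer_id, address, is_current, eff_start, eff_end
src = spark.read.parquet("s3://staging/customer_updates/")  # customer_id, address

curr = dim.filter(F.col("is_current"))
hist = dim.filter(~F.col("is_current"))

# Keys whose tracked attribute changed in this load.
changed = (curr.alias("c").join(src.alias("s"), "customer_id")
               .filter(F.col("c.address") != F.col("s.address"))
               .select("customer_id"))

# Expire the current version of changed keys; keep the rest untouched.
expired = (curr.join(changed, "customer_id", "left_semi")
               .withColumn("is_current", F.lit(False))
               .withColumn("eff_end", F.current_date()))
unchanged = curr.join(changed, "customer_id", "left_anti")

# Append a fresh current version from the source for the changed keys.
new_rows = (src.join(changed, "customer_id", "left_semi")
                .withColumn("is_current", F.lit(True))
                .withColumn("eff_start", F.current_date())
                .withColumn("eff_end", F.lit(None).cast("date")))

result = (hist.unionByName(unchanged).unionByName(expired)
              .unionByName(new_rows, allowMissingColumns=True))
result.write.mode("overwrite").parquet("s3://dwh/customer_dim_v2/")
```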
Posted 1 week ago
10.0 years
0 Lacs
Greater Kolkata Area
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Manager

Job Description & Summary
A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance, that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge.
Creating business intelligence from data requires an understanding of the business, the data, and the technology used to store and analyse that data. Using our Rapid Business Intelligence Solutions, data visualisation and integrated reporting dashboards, we can deliver agile, highly interactive reporting and analytics that help our clients to more effectively run their business and understand what business questions can be answered and how to unlock the answers.

Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.
At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Key Responsibilities
Project Management: Lead the end-to-end delivery of data projects, including Data Warehouse, Data Lake, and Lakehouse solutions. Develop detailed project plans, allocate resources, and monitor project progress to ensure timely and within-budget delivery. Identify and mitigate risks, ensuring successful project outcomes.
Technical Leadership: Provide technical oversight and guidance on best practices in data engineering, cloud architecture, and data management. Ensure solutions are scalable, robust, and aligned with industry standards and client requirements. Oversee the design, development, and implementation of data solutions using Azure or AWS and Databricks.
Client Engagement: Engage with clients to understand their business needs and translate them into technical requirements. Build and maintain strong relationships with key client stakeholders. Present complex technical concepts and solutions in a clear and concise manner to non-technical stakeholders.
Team Leadership: Lead and mentor a team of data engineers, fostering a collaborative and high-performance culture. Provide guidance and support to team members in their professional development and project delivery. Ensure the team is equipped with the necessary tools and resources to succeed.
Solution Development: Develop and implement data pipelines, ETL processes, and data integration solutions using Azure Data Factory, AWS Glue, Databricks, and other relevant tools. Optimize data storage and retrieval performance, ensuring data quality and integrity. Leverage advanced analytics and machine learning capabilities within Databricks to drive business insights.
Continuous Improvement: Stay up to date with the latest advancements in Azure, AWS, Databricks, and data engineering technologies. Implement best practices and continuous improvement initiatives to enhance the efficiency and effectiveness of data engineering processes. Foster a culture of innovation and experimentation within the team.

Skills & Competencies
Strong problem-solving and analytical skills.
Deep technical expertise in Azure, Google Cloud, or AWS, and Databricks.
Exceptional project management and organizational abilities.
High level of emotional intelligence and client empathy.

Proficiency In
Concepts of data warehousing and data lakes (e.g., SCD1, SCD2, dimensional modeling, KPIs and measures, data catalog, star and snowflake schema, Delta Tables and Delta Live Tables).
Data warehousing solutions (e.g., Azure Synapse, Azure SQL, ADLS Gen2, Blob Storage for Azure; Redshift, S3, AWS Glue, AWS Lambda for AWS; Google data management technologies).
Data Lake solutions (e.g., MS Fabric, Purview, AWS Lakehouse, BigQuery and BigTable).
Lakehouse solutions (e.g., Databricks, Unity Catalog, Python and PySpark).
Data visualization tools (e.g., Power BI, Tableau) is a plus.

Mandatory Skill Sets: Project Management, Azure, AWS
Preferred Skill Sets: Project Management, Azure, AWS
Years of Experience Required: 10+ years
Education Qualification: BE, B.Tech, MCA, M.Tech
Education (if blank, degree and/or field of study not specified)
Degrees/Field of Study required: Master of Engineering, Bachelor of Engineering
Degrees/Field of Study preferred:
Certifications (if blank, certifications not specified)
Required Skills: AWS DevOps, Microsoft Azure, Waterfall Model
Optional Skills: Accepting Feedback, Active Listening, Analytical Reasoning, Analytical Thinking, Application Software, Business Data Analytics, Business Management, Business Technology, Business Transformation, Coaching and Feedback, Communication, Creativity, Documentation Development, Embracing Change, Emotional Regulation, Empathy, Implementation Research, Implementation Support, Implementing Technology, Inclusion, Intellectual Curiosity, Learning Agility, Optimism, Performance Assessment {+ 21 more}
Desired Languages (if blank, desired languages not specified)
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
Job Posting End Date:
Posted 1 week ago
5.0 - 10.0 years
7 - 10 Lacs
Hyderābād
On-site
At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

AWS Data Engineer - Senior
We are seeking a highly skilled and motivated hands-on AWS Data Engineer with 5-10 years of experience in AWS Glue, PySpark, AWS Redshift, S3, and Python to join our dynamic team. As a Data Engineer, you will be responsible for designing, developing, and optimizing data pipelines and solutions that support business intelligence, analytics, and large-scale data processing. You will work closely with data scientists, analysts, and other engineering teams to ensure seamless data flow across our systems.

Technical Skills
Must have strong experience in AWS data services like Glue, Lambda, EventBridge, Kinesis, S3/EMR, Redshift, RDS, Step Functions, Airflow, and PySpark.
Strong exposure to IAM, CloudTrail, cluster optimization, Python, and SQL.
Should have expertise in data design, STTM, understanding of data models, data component design, automated testing, code coverage, UAT support, deployment, and go-live.
Experience with version control systems like SVN and Git.
Create and manage AWS Glue crawlers and jobs to automate data cataloging and ingestion processes across various structured and unstructured data sources.
Strong experience with AWS Glue: building ETL pipelines, managing crawlers, and working with the Glue Data Catalog.
Proficiency in AWS Redshift: designing and managing Redshift clusters, writing complex SQL queries, and optimizing query performance.
Enable data consumption from reporting and analytics business applications using AWS services (e.g., QuickSight, SageMaker, JDBC/ODBC connectivity, etc.).

Behavioural Skills
Willing to work 5 days a week from ODC / client location (depending on the project, this can be hybrid with 3 days a week).
Ability to lead developers and engage with client stakeholders to drive technical decisions.
Ability to do technical design and POCs: help build and analyse the logical data model, required entities, relationships, data constraints, and dependencies focused on enabling reporting and analytics business use cases.
Should be able to work in an Agile environment.
Should have strong communication skills.

Good to Have
Exposure to Financial Services, Wealth and Asset Management.
Exposure to data science and full-stack technologies.
GenAI will be an added advantage.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
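Glue crawler and job work like that described above typically follows the standard Glue job skeleton; a minimal illustrative sketch that only runs inside an AWS Glue job environment (the database, table, mappings, and S3 paths are hypothetical):

```python
import sys
from awsglue.transforms import ApplyMapping
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a crawled table from the Glue Data Catalog (database/table names are hypothetical).
dyf = glue_context.create_dynamic_frame.from_catalog(database="sales_db", table_name="raw_orders")

# Rename/cast columns, then land curated Parquet on S3 for Redshift Spectrum or COPY.
mapped = ApplyMapping.apply(
    frame=dyf,
    mappings=[("order_id", "string", "order_id", "string"),
              ("amount", "string", "amount", "double")],
)
glue_context.write_dynamic_frame.from_options(
    frame=mapped,
    connection_type="s3",
    connection_options={"path": "s3://curated-bucket/orders/"},
    format="parquet",
)
job.commit()
```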
Posted 1 week ago
10.0 - 15.0 years
30 - 40 Lacs
Pune, Bengaluru
Hybrid
Job Role & Responsibilities
Understanding operational needs by collaborating with specialized teams and supporting key business operations. This involves architecture design, and building and deploying data systems, pipelines, etc.
Designing and implementing agile, scalable, and cost-efficient solutions on cloud data services.
Lead a team of developers, and run sprint planning and execution to ensure timely deliveries.
Design and implement data solutions using the medallion architecture, ensuring effective organization and flow of data through bronze, silver, and gold layers.
Optimize data storage and processing strategies to enhance performance and data accessibility across various stages of the medallion architecture.
Collaborate with data engineers and analysts to define data access patterns and establish efficient data pipelines.
Develop and oversee data flow strategies to ensure seamless data movement and transformation across different environments and stages of the data lifecycle.
Migrate data from traditional database systems to the cloud environment.

Technical Skills, Qualification & Experience Required
9-11 years of experience in Cloud Data Engineering.
Experience in Azure Cloud Data Engineering: Azure Databricks, Data Factory, PySpark, SQL, Python.
Hands-on experience as a Data Engineer with Azure Databricks, Data Factory, PySpark, and SQL.
Proficient in Azure cloud services.
Architect and implement ETL and data movement solutions.
Bachelor's/Master's degree in Computer Science or a related field.
Strong hands-on experience working with streaming datasets.
Building complex notebooks in Databricks to implement business transformations.
Hands-on expertise in data refinement using PySpark and Spark SQL.
Familiarity with building datasets using Scala.
Familiarity with tools such as Jira and GitHub.
Experience leading agile scrum, sprint planning, and review sessions.
Good communication and interpersonal skills.

Note: Immediate joiners will be preferred.
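The medallion responsibilities above usually boil down to bronze-to-silver refinement notebooks; a minimal illustrative PySpark/Delta sketch (assumes a Databricks notebook where spark is predefined; table names and columns are hypothetical):

```python
from pyspark.sql import functions as F

# Bronze: raw events as ingested (Delta table names are hypothetical).
# `spark` is the SparkSession predefined in a Databricks notebook.
bronze = spark.table("bronze.events")

# Silver: cleaned, de-duplicated, conformed records.
silver = (
    bronze.dropDuplicates(["event_id"])
          .filter(F.col("event_ts").isNotNull())
          .withColumn("event_date", F.to_date("event_ts"))
          .select("event_id", "user_id", "event_type", "event_ts", "event_date")
)

# Persist the refined layer as a partitioned Delta table for gold-layer aggregation.
(silver.write.format("delta")
       .mode("overwrite")
       .partitionBy("event_date")
       .saveAsTable("silver.events"))
```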
Posted 1 week ago
2.0 years
3 - 9 Lacs
Mumbai
On-site
You're ready to gain the skills and experience needed to grow within your role and advance your career, and we have the perfect software engineering opportunity for you.

As a Software Engineer II at JPMorgan Chase within the Consumer & Community Banking Rewards Team, you are part of an agile team that works to enhance, design, and deliver the software components of the firm's state-of-the-art technology products in a secure, stable, and scalable way. As an emerging member of a software engineering team, you execute software solutions through the design, development, and technical troubleshooting of multiple components within a technical product, application, or system, while gaining the skills and experience needed to grow within your role.

Job responsibilities
Executes standard software solutions, design, development, and technical troubleshooting.
Writes secure and high-quality code using the syntax of at least one programming language with limited guidance.
Designs, develops, codes, and troubleshoots with consideration of upstream and downstream systems and technical implications.
Applies knowledge of tools within the Software Development Life Cycle toolchain to improve the value realized by automation.
Applies technical troubleshooting to break down solutions and solve technical problems of basic complexity.
Gathers, analyzes, synthesizes, and develops visualizations and reporting from large, diverse data sets in service of continuous improvement of software applications and systems.
Proactively identifies hidden problems and patterns in data and uses these insights to drive improvements to coding hygiene and system architecture.
Designs and develops data pipelines end to end using PySpark, Java, Python, and AWS services.
Utilizes container orchestration services, including Kubernetes, and a variety of AWS tools and services.
Learns and applies system processes, methodologies, and skills for the development of secure, stable code and systems.
Adds to a team culture of diversity, equity, inclusion, and respect.

Required qualifications, capabilities, and skills
Formal training or certification on software engineering concepts and 2 years of applied experience.
Hands-on practical experience in system design, application development, testing, and operational stability.
Experience in developing, debugging, and maintaining code in a large corporate environment with one or more modern programming languages and database querying languages.
Hands-on practical experience in developing Spark-based frameworks for end-to-end ETL, ELT, and reporting solutions using key components like Spark and Spark Streaming.
Proficient in coding in one or more languages: Core Java, Python, and PySpark.
Experience with relational and data warehouse databases, and cloud implementation experience with AWS, including:
AWS data services: proficiency in Lake Formation, Glue ETL (or EMR), S3, Glue Catalog, Athena, and Airflow (or Lambda + Step Functions + EventBridge).
Data de/serialization: expertise in at least two of the formats Parquet, Iceberg, Avro, JSON.
AWS data security: good understanding of security concepts such as Lake Formation, IAM, service roles, encryption, KMS, and Secrets Manager.
Proficiency in automation and continuous delivery methods.

Preferred qualifications, capabilities, and skills
Experience in Snowflake is nice to have.
Solid understanding of agile methodologies such as CI/CD, Application Resiliency, and Security.
In-depth knowledge of the financial services industry and their IT systems.
Practical cloud-native experience, preferably AWS.
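The Spark Streaming frameworks mentioned above typically use Structured Streaming from Kafka; a minimal illustrative sketch (requires the spark-sql-kafka package; the broker, topic, schema, and S3 paths are hypothetical):

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("txn_stream").getOrCreate()

schema = StructType([
    StructField("txn_id", StringType()),
    StructField("account_id", StringType()),
    StructField("amount", DoubleType()),
])

# Read a hypothetical transactions topic from Kafka and parse the JSON payload.
raw = (spark.readStream.format("kafka")
            .option("kafka.bootstrap.servers", "broker:9092")
            .option("subscribe", "transactions")
            .load())
parsed = raw.select(F.from_json(F.col("value").cast("string"), schema).alias("t")).select("t.*")

# Land micro-batches as Parquet on S3, with checkpointing for exactly-once file output.
query = (parsed.writeStream.format("parquet")
               .option("path", "s3://lake/transactions/")
               .option("checkpointLocation", "s3://lake/_checkpoints/transactions/")
               .start())
query.awaitTermination()
```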
Posted 1 week ago
0 years
5 - 9 Lacs
Bengaluru
On-site
Our people work differently depending on their jobs and needs. From hybrid working to flexible hours, we have plenty of options that help our people to thrive. This role is based in India, and as such all normal working days must be carried out in India.

Job description
Join us as a Principal Engineer - PySpark
This is a challenging role that will see you design and engineer software with the customer or user experience as the primary objective. You'll actively contribute to our architecture, design and engineering centre of excellence, collaborating to improve the bank's overall software engineering capability. You'll gain valuable stakeholder exposure as you build and leverage relationships, as well as the opportunity to hone your technical talents. We're offering this role at vice president level.

What you'll do
As a Principal Engineer, you'll be creating great customer outcomes via engineering and innovative solutions to existing and new challenges, and technology designs which are innovative, customer centric, high performance, secure and robust. You'll be working with software engineers in the production and prototyping of innovative ideas, engaging with domain and enterprise architects to validate and leverage these in wider contexts, by incorporating the relevant architectures. We'll also look to you to design and develop software with a focus on the automation of build, test and deployment activities, while developing the discipline of software engineering across the business.
You'll also be:
Defining, creating and providing oversight and governance of engineering and design solutions with a focus on end-to-end automation, simplification, resilience, security, performance, scalability and reusability.
Working within a platform or feature team along with software engineers to design and engineer complex software, scripts and tools to enable the delivery of bank platforms, applications and services, acting as a point of contact for solution design considerations.
Defining and developing architecture models and roadmaps of application and software components to meet business and technical requirements, driving common usability across products and domains.
Designing, producing, testing and implementing the working code, along with applying Agile methods to the development of software with the use of DevOps techniques.

The skills you'll need
You'll come with significant experience in software engineering, software or database design and architecture, as well as experience of developing software within a DevOps and Agile framework. Along with an expert understanding of the latest market trends, technologies and tools, you'll need at least ten years of experience working with Python or PySpark, with at least four years of team handling experience. You'll need experience in model development and support, with expertise in Spark SQL query optimization and performance tuning. You'll also need experience in writing advanced Spark SQL or ANSI SQL queries. Knowledge of AWS will be highly desired.
You'll also need:
A strong background in leading software development teams in a matrix structure, introducing and executing technical strategies.
Experience in Unix or Linux scripting, Airflow, continuous integration, DevOps, Git and Artifactory.
Experience in Agile, a test-driven development approach and software delivery best practice.
The ability to rapidly and effectively understand and translate product and business requirements into technical solutions.
A background of working with code repositories, bug tracking tools and wikis.
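Spark SQL query optimization of the kind mentioned above often starts with the join strategy; a minimal illustrative sketch of a broadcast join hint (the table and column names are hypothetical):

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("sql_tuning").getOrCreate()

# Hypothetical tables: a large fact table and a small reference table.
trades = spark.table("fact_trades")
books = spark.table("dim_books")

# Broadcasting the small side avoids shuffling the large fact table.
joined = trades.join(F.broadcast(books), "book_id")

# Inspect the physical plan to confirm a BroadcastHashJoin was chosen.
joined.explain()

# The same hint expressed in Spark SQL.
spark.sql("""
    SELECT /*+ BROADCAST(b) */ t.trade_id, t.notional, b.desk
    FROM fact_trades t
    JOIN dim_books b ON t.book_id = b.book_id
""").explain()
```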
Posted 1 week ago
3.0 - 6.0 years
3 - 4 Lacs
Bengaluru
On-site
At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Job Description
We are seeking a passionate data analyst to transform data into actionable insights and support decision-making in a global organization focused on pricing and commercial strategy. This role spans business analysis, requirements gathering, data modeling, solution design, and visualization using modern tools. The analyst will also maintain and improve existing analytics solutions, interpret complex datasets, and communicate findings clearly to both technical and non-technical audiences.

Essential Functions of the Job
Analyze and interpret structured and unstructured data using statistical and quantitative methods to generate actionable insights and ongoing reports.
Design and implement data pipelines and processes for data cleaning, transformation, modeling, and visualization using tools such as Power BI, SQL, and Python.
Collaborate with stakeholders to define requirements, prioritize business needs, and translate problems into analytical solutions.
Develop, maintain, and enhance scalable analytics solutions and dashboards that support pricing strategy and commercial decision-making.
Identify opportunities for process improvement and operational efficiency through data-driven recommendations.
Communicate complex findings in a clear, compelling, and actionable manner to both technical and non-technical audiences.

Analytical/Decision Making Responsibilities
Apply a hypothesis-driven approach to analyzing ambiguous or complex data and synthesizing insights to guide strategic decisions.
Promote adoption of best practices in data analysis, modeling, and visualization, while tailoring approaches to meet the unique needs of each project.
Tackle analytical challenges with creativity and rigor, balancing innovative thinking with practical problem-solving across varied business domains.
Prioritize work based on business impact and deliver timely, high-quality results in fast-paced environments with evolving business needs.
Demonstrate sound judgement in selecting methods, tools, and data sources to support business objectives.

Knowledge and Skills Requirements
Proven experience as a data analyst, business analyst, data engineer, or similar role.
Strong analytical skills with the ability to collect, organize, analyze, and present large datasets accurately.
Foundational knowledge of statistics, including concepts like distributions, variance, and correlation.
Skilled in documenting processes and presenting findings to both technical and non-technical audiences.
Hands-on experience with Power BI for designing, developing, and maintaining analytics solutions.
Proficient in both Python and SQL, with strong programming and scripting skills.
Skilled in using Pandas, T-SQL, and Power Query M for querying, transforming, and cleaning data.
Hands-on experience in data modeling for both transactional (OLTP) and analytical (OLAP) database systems.
Strong visualization skills using Power BI and Python libraries such as Matplotlib and Seaborn.
Experience with defining and designing KPIs and aligning data insights with business goals.

Additional/Optional Knowledge and Skills
Experience with the Microsoft Fabric data analytics environment.
Proficiency in using the Apache Spark distributed analytics engine, particularly via PySpark and Spark SQL.
Exposure to implementing machine learning or AI solutions in a business context.
Familiarity with Python machine learning libraries such as scikit-learn, XGBoost, PyTorch, or transformers.
Experience with Power Platform tools (Power Apps, Power Automate, Dataverse, Copilot Studio, AI Builder).
Knowledge of pricing, commercial strategy, or competitive intelligence.
Experience with cloud-based data services, particularly in the Azure ecosystem (e.g., Azure Synapse Analytics or Azure Machine Learning).

Supervision Responsibilities
Operates with a high degree of independence and autonomy.
Collaborates closely with cross-functional teams including sales, pricing, and commercial strategy.
Mentors junior team members, helping develop technical skills and business domain knowledge.

Other Requirements
Collaborates with a team operating primarily in the Eastern Time Zone (UTC -4:00 / -5:00).
Limited travel may be required for this role.

Job Requirements
Education: A bachelor's degree in a STEM field relevant to data analysis, data engineering, or data science is required. Examples include (but are not limited to) computer science, statistics, data analytics, artificial intelligence, operations research, or econometrics.
Experience: 3-6 years of experience in data analysis, data engineering, or a closely related field, ideally within a professional services environment.
Certification Requirements: No certifications are required for this role.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate.
Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 1 week ago
2.0 - 3.0 years
4 - 6 Lacs
Bengaluru
On-site
Job Information
Number of Positions: 1
Industry: Engineering
Date Opened: 06/09/2025
Job Type: Permanent
Work Experience: 2-3 years
City: Bangalore
State/Province: Karnataka
Country: India
Zip/Postal Code: 560037
Location: Bangalore

About Us
CloudifyOps is a company with DevOps and Cloud in our DNA. CloudifyOps enables businesses to become more agile and innovative through a comprehensive portfolio of services that addresses hybrid IT transformation, Cloud transformation, and end-to-end DevOps workflows. We are a proud Advanced Partner of Amazon Web Services and have deep expertise in Microsoft Azure and Google Cloud Platform solutions.
We are passionate about what we do. The novelty and the excitement of helping our customers accomplish their goals drives us to become excellent at what we do.

Job Description
Culture at CloudifyOps: Working at CloudifyOps is a rewarding experience! Great people, a work environment that thrives on creativity, and the opportunity to take on roles beyond a defined job description are just some of the reasons you should work with us.

About the Role
We are seeking a proactive and technically skilled AI/ML Engineer with 2-3 years of experience to join our growing technology team. The ideal candidate will have hands-on expertise in AWS-based machine learning, Agentic AI, and Generative AI tools, especially within the Amazon AI ecosystem. You will play a key role in building intelligent, scalable solutions that address complex business challenges.

Key Responsibilities
1. AWS-Based Machine Learning
Develop, train, and fine-tune ML models on AWS SageMaker, Bedrock, and EC2.
Implement serverless ML workflows using Lambda, Step Functions, and EventBridge.
Optimize models for cost/performance using AWS Inferentia/Trainium.
2. MLOps & Productionization
Build CI/CD pipelines for ML using AWS SageMaker Pipelines, MLflow, or Kubeflow.
Containerize models with Docker and deploy via AWS EKS/ECS/Fargate.
Monitor models in production using AWS CloudWatch and SageMaker Model Monitor.
3. Agentic AI Development
Design autonomous agent systems (e.g., AutoGPT, BabyAGI) for task automation.
Integrate multi-agent frameworks (LangChain, AutoGen) with AWS services.
Implement RAG (Retrieval-Augmented Generation) for agent knowledge enhancement.
4. Generative AI & LLMs
Fine-tune and deploy LLMs (GPT-4, Claude, Llama 2/3) using LoRA/QLoRA.
Build Generative AI apps (chatbots, content generators) with LangChain, LlamaIndex.
Optimize prompts and evaluate LLM performance using AWS Bedrock/Amazon Titan.
5. Collaboration & Innovation
Work with cross-functional teams to translate business needs into AI solutions.
Collaborate with DevOps and Cloud Engineering teams to develop scalable, production-ready AI systems.
Stay updated with cutting-edge AI research (arXiv, NeurIPS, ICML).
6. Governance & Documentation
Implement model governance frameworks to ensure ethical AI/ML deployments.
Design reproducible ML pipelines following MLOps best practices (versioning, testing, monitoring).
Maintain detailed documentation for models, APIs, and workflows (Markdown, Sphinx, ReadTheDocs).
Create runbooks for model deployment, troubleshooting, and scaling.

Technical Skills
Programming: Python (PyTorch, TensorFlow, Hugging Face Transformers).
AWS: SageMaker, Lambda, ECS/EKS, Bedrock, S3, IAM.
MLOps: MLflow, Kubeflow, Docker, GitHub Actions/GitLab CI.
Generative AI: prompt engineering, LLM fine-tuning, RAG, LangChain.
Agentic AI: AutoGPT, BabyAGI, multi-agent orchestration.
Data Engineering: SQL, PySpark, AWS Glue/EMR.
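The RAG responsibility above reduces to an embed-retrieve-generate loop; a minimal, framework-agnostic sketch in which embed() and generate() are hypothetical stand-ins for an embedding model and an LLM (for example, ones served through Amazon Bedrock), not real library calls:

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(text: str) -> np.ndarray:
    # Hypothetical stand-in for a real embedding model call.
    return rng.normal(size=384)

def generate(prompt: str) -> str:
    # Hypothetical stand-in for an LLM call; a real implementation would invoke a model API.
    return f"[answer grounded in a prompt of {len(prompt)} characters]"

documents = ["Runbook: restart the ingestion job ...", "Policy: model governance steps ..."]
doc_vectors = [embed(d) for d in documents]

def answer(question: str, top_k: int = 2) -> str:
    q = embed(question)
    # Rank documents by cosine similarity and keep the most relevant ones.
    scores = [float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v))) for v in doc_vectors]
    context = "\n".join(documents[i] for i in np.argsort(scores)[::-1][:top_k])
    # Ground the model's answer in the retrieved context.
    return generate(f"Answer using only this context:\n{context}\n\nQuestion: {question}")
```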
Soft Skills
Strong problem-solving and analytical thinking.
Ability to explain complex AI concepts to non-technical stakeholders.

What We're Looking For
Bachelor's/Master's in CS, AI, Data Science, or a related field.
2-3 years of industry experience in AI/ML engineering.
Portfolio of deployed ML/AI projects (GitHub, blog, case studies).
An AWS Certified Machine Learning - Specialty certification is good to have.

Why Join Us?
Innovative Projects: Work on cutting-edge AI applications that push the boundaries of technology.
Collaborative Environment: Join a team of passionate engineers and researchers committed to excellence.
Career Growth: Opportunities for professional development and advancement in the rapidly evolving field of AI.

Equal opportunity employer
CloudifyOps is proud to be an equal opportunity employer with a global culture that embraces diversity. We are committed to providing an environment free of unfair discrimination and harassment. We do not discriminate based on age, race, color, sex, religion, national origin, disability, pregnancy, marital status, sexual orientation, gender reassignment, veteran status, or other protected category.
Posted 1 week ago
4.0 years
10 - 17 Lacs
India
On-site
We are looking for an experienced Big Data Developer (immediate joiners only) with a strong background in PySpark, Python/Scala, Spark, SQL, and the Hadoop ecosystem. The ideal candidate should have over 4 years of experience and be ready to join immediately. This role requires hands-on expertise in big data technologies and the ability to design and implement robust data processing solutions.

Key Responsibilities:
Design, develop, and optimize large-scale data processing pipelines using PySpark.
Work with various Apache tools and frameworks (like Hadoop, Hive, HDFS, etc.) to ingest, transform, and manage large datasets.
Ensure high performance and reliability of ETL jobs in production.
Collaborate with Data Scientists, Analysts, and other stakeholders to understand data needs and deliver robust data solutions.
Implement data quality checks and data lineage tracking for transparency and auditability.
Work on data ingestion, transformation, and integration from multiple structured and unstructured sources.
Leverage Apache NiFi for automated and repeatable data flow management (if applicable).
Write clean, efficient, and maintainable code in Python and Java.
Contribute to architectural decisions, performance tuning, and scalability planning.

Required Skills:
5-7 years of experience.
Strong hands-on experience with PySpark for distributed data processing.
Deep understanding of the Apache ecosystem (Hadoop, Hive, Spark, HDFS, etc.).
Solid grasp of data warehousing, ETL principles, and data modeling.
Experience working with large-scale datasets and performance optimization.
Familiarity with SQL and NoSQL databases.
Proficiency in Python and basic to intermediate knowledge of Java.
Experience in using version control tools like Git and CI/CD pipelines.

Nice-to-Have Skills:
Working experience with Apache NiFi for data flow orchestration.
Experience in building real-time streaming data pipelines.
Knowledge of cloud platforms like AWS, Azure, or GCP.
Familiarity with containerization tools like Docker or orchestration tools like Kubernetes.

Soft Skills:
Strong analytical and problem-solving skills.
Excellent communication and collaboration abilities.
Self-driven with the ability to work independently and as part of a team.

Education: Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.

Job Type: Full-time
Pay: ₹1,000,000.00 - ₹1,700,000.00 per year
Benefits: Health insurance
Schedule: Day shift
Supplemental Pay: Performance bonus, yearly bonus
Ability to commute/relocate: Basavanagudi, Bengaluru, Karnataka: Reliably commute or planning to relocate before starting work (Preferred)
Application Question(s): Are you ready to join within 15 days? What is your current CTC?
Experience: Python: 4 years (Preferred); PySpark: 4 years (Required); Data warehouse: 4 years (Required)
Work Location: In person
Application Deadline: 12/06/2025
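Data quality checks like those listed above are often simple PySpark assertions over a staging table; a minimal illustrative sketch (the Hive table and key column are hypothetical):

```python
from pyspark.sql import SparkSession, functions as F

spark = (SparkSession.builder
         .appName("dq_checks")
         .enableHiveSupport()
         .getOrCreate())

# Hypothetical Hive table populated by an upstream ingestion job.
df = spark.table("staging.customer_events")

# Simple data quality checks: row count, null keys, duplicate keys.
total = df.count()
null_keys = df.filter(F.col("customer_id").isNull()).count()
dup_keys = df.groupBy("customer_id").count().filter("count > 1").count()

print(f"rows={total}, null_keys={null_keys}, duplicate_keys={dup_keys}")
if null_keys > 0 or dup_keys > 0:
    raise ValueError("Data quality check failed for staging.customer_events")
```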
Posted 1 week ago
6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Position Summary
Advanced Analytics Data Visualizer
Do you thrive on developing creative and innovative insights to solve complex challenges? Want to work on next-generation, cutting-edge products and services that deliver outstanding value and that are global in vision and scope? Work with other experts in your field. Work for a world-class organization that provides an exceptional career experience with an inclusive and collaborative culture. Want to make an impact that matters? Consider Deloitte Global.
As a key member of the advanced analytics team within Deloitte Global, the candidate will partner with the data science and strategy teams to provide the design and development of data visualizations that highlight key insights and findings for the business. Good presentation, technical and communication skills are essential to distil complex data into easy-to-understand messaging for all levels of leadership.

Duties and Responsibilities
The Power BI SME Lead is a senior position requiring 6-10 years of experience in BI engineering. The role involves leading a team of problem solvers, or working individually, to address complex business challenges using data, analytics, and insights. Collaborate with business stakeholders to understand their requirements and translate them into technical specifications. The candidate will be responsible for strategy execution, managing project deliverables, ensuring adherence to SLAs, and fostering team collaboration. The ideal candidate will demonstrate leadership by directly working with clients, managing risks, and contributing to the company's Centre of Excellence activities. A bachelor's degree in computer science, IT, or a related field is required, along with strong technical skills in Power BI and other analytics tools.

Required Technical Skills
Primary skills: Power BI Desktop, Power Query, Power BI Service, data visualization, DAX, MS Excel, data analytics.
Develop and manage ETL processes to extract, transform, and load data from various sources into Power BI.
Automate data refresh and update processes to ensure timely availability of data.
Conduct internal training in BI tools and provide technical expertise on the data visualization aspect, working as an SME.
Familiarity with data analytics tools such as Python, SQL, and PySpark.
Minimum 7+ years of experience delivering managed data and analytics programs.
Excellent communication, problem-solving, quantitative, and analytical skills.
Good to know: Tableau Desktop, Tableau Prep Builder, Qlik, SQL, Advanced Excel, Excel Macros.
Good to have: certifications in Power BI and other BI tools.

Education
7-9+ years of industry experience with a bachelor's degree in computer science or a related field.

How You Will Grow
At Deloitte, we have invested a great deal to create a rich environment in which our professionals can grow. We want all our people to develop in their own way, playing to their own strengths as they hone their leadership skills. And, as a part of our efforts, we provide our professionals with a variety of learning and networking opportunities, including exposure to leaders, sponsors, coaches, and challenging assignments, to help accelerate their careers along the way. No two people learn in exactly the same way. So, we provide a range of resources, including live classrooms, team-based learning, and eLearning. DU: The Leadership Center in India, our state-of-the-art, world-class learning center in the Hyderabad offices, is an extension of the Deloitte University (DU) in Westlake, Texas, and represents a tangible symbol of our commitment to our people's growth and development.
Explore DU: The Leadership Center in India

Benefits
At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you.

Deloitte's culture
Our positive and supportive culture encourages our people to do their best work every day. We celebrate individuals by recognizing their uniqueness and offering them the flexibility to make daily choices that can help them to be healthy, centered, confident, and aware. We offer well-being programs and are continuously looking for new ways to maintain a culture that is inclusive, invites authenticity, leverages our diversity, and where our people excel and lead healthy, happy lives. Learn more about Life at Deloitte.

Corporate citizenship
Deloitte is led by a purpose: to make an impact that matters. This purpose defines who we are and extends to relationships with our clients, our people, and our communities. We believe that business has the power to inspire and transform. We focus on education, giving, skill-based volunteerism, and leadership to help drive positive social impact in our communities. Learn more about Deloitte's impact on the world.

About Deloitte
Deloitte refers to one or more of Deloitte Touche Tohmatsu Limited ("DTTL"), its global network of member firms, and their related entities (collectively, the "Deloitte organization"). DTTL (also referred to as "Deloitte Global") and each of its member firms and related entities are legally separate and independent entities, which cannot obligate or bind each other in respect of third parties. DTTL and each DTTL member firm and related entity is liable only for its own acts and omissions, and not those of each other. DTTL does not provide services to clients. Please see www.deloitte.com/about to learn more.
This communication contains general information only, and none of Deloitte Touche Tohmatsu Limited ("DTTL"), its global network of member firms or their related entities (collectively, the "Deloitte organization") is, by means of this communication, rendering professional advice or services. Before making any decision or taking any action that may affect your finances or your business, you should consult a qualified professional adviser. No representations, warranties or undertakings (express or implied) are given as to the accuracy or completeness of the information in this communication, and none of DTTL, its member firms, related entities, personnel or agents shall be liable or responsible for any loss or damage whatsoever arising directly or indirectly in connection with any person relying on this communication. DTTL and each of its member firms, and their related entities, are legally separate and independent entities.
© 2020. For information, contact Deloitte Touche Tohmatsu Limited.

Recruiting tips
From developing a stand-out resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.

Our people and culture
Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges.
This makes Deloitte one of the most rewarding places to work.

Our purpose
Deloitte's purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Professional development
From entry-level employees to senior leaders, we believe there's always room to learn. We offer opportunities to build new skills, take on leadership opportunities and connect and grow through mentorship. From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue to grow throughout their career.

Requisition code: 304121
Posted 1 week ago
5.0 years
1 - 9 Lacs
Bengaluru
On-site
Security represents the most critical priorities for our customers in a world awash in digital threats, regulatory scrutiny, and estate complexity. Microsoft Security aspires to make the world a safer place for all. We want to reshape security and empower every user, customer, and developer with a security cloud that protects them with end-to-end, simplified solutions. The Microsoft Security organization accelerates Microsoft's mission and bold ambitions to ensure that our company and industry is securing digital technology platforms, devices, and clouds in our customers' heterogeneous environments, as well as ensuring the security of our own internal estate. Our culture is centered on embracing a growth mindset, a theme of inspiring excellence, and encouraging teams and leaders to bring their best each day. In doing so, we create life-changing innovations that impact billions of lives around the world.
The Cloud App and Identity Research (CAIR) team is leading the security research of Microsoft Defender for Cloud Apps. We are working on the edge technology of AI and cloud. Researchers in the team are world-class experts in cloud-related threats; they are talented and enthusiastic employees.
Microsoft's mission is to empower every person and every organization on the planet to achieve more. As employees, we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.

Responsibilities
Build algorithms and innovative methods to discover and defend against real-world, sophisticated cloud-based attacks in the SaaS ecosystem.
Collaborate with other data scientists to develop machine learning systems for detecting anomalies, compromises, fraud, and non-human identity cyber-attacks using both generative AI and graph-based systems.
Identify and integrate multiple data sources, or types of data, and develop expertise with multiple data sources to tell a story, identify new patterns and business opportunities, and communicate visually and verbally with clear and compelling data-driven stories.
Analyze extensive datasets and develop a robust, scalable feature engineering pipeline within a PySpark-based environment.
Acquire and use broad knowledge of innovative methods, algorithms, and tools from within Microsoft and from the scientific literature, and apply your own analysis of scalability and applicability to the formulated problem.
Work across threat researchers, engineering, and product teams to enable metrics for product success.
Contribute to active engagement with the security ecosystem through research papers, presentations, and blogs.
Provide subject matter expertise to customers based on industry attack trends and product capabilities.

Qualifications
5+ years of programming experience in languages such as C/C++/C#/Python, with hands-on experience in using technologies such as Spark, Azure ML, SQL, KQL, Databricks, etc.
Able to prepare data pipelines and feature engineering pipelines to build robust models using SQL, PySpark, Azure Data Studio, etc.
Knowledge of classification, prediction, anomaly detection, optimization, graph ML, and NLP.
Comfortable manipulating and analyzing complex, high-dimensional data from various sources to solve difficult problems.
Knowledge of working in a cloud-computing environment like Azure / AWS / Google Cloud.
· Proficient in Relational Databases (SQL), Big Data Technologies (PySpark). Azure storage technologies such as ADLS, cosmos DB, etc. Generative AI experience is a plus · Bachelor's or higher degrees in Computer Science, Statistics, Mathematics, Engineering, or related disciplines. Microsoft is an equal opportunity employer. Consistent with applicable law, all qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.
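Where the responsibilities above call for a scalable feature-engineering pipeline in a PySpark-based environment, a minimal sketch of that kind of job is shown below. The event source, column names, paths, and aggregation windows are illustrative assumptions, not details from the posting.

```python
from pyspark.sql import SparkSession, functions as F, Window

spark = SparkSession.builder.appName("signin-feature-engineering").getOrCreate()

# Hypothetical sign-in event data; path and schema are assumptions for illustration.
events = spark.read.parquet("abfss://security@example.dfs.core.windows.net/signin_events/")

# Per-user daily aggregates commonly used as anomaly-detection features.
daily = (
    events
    .withColumn("event_date", F.to_date("event_time"))
    .groupBy("user_id", "event_date")
    .agg(
        F.count("*").alias("signin_count"),
        F.countDistinct("source_ip").alias("distinct_ips"),
        F.countDistinct("country").alias("distinct_countries"),
        F.sum(F.col("is_failure").cast("int")).alias("failed_signins"),
    )
)

# Compare each day against the user's trailing 30-day baseline.
w = Window.partitionBy("user_id").orderBy("event_date").rowsBetween(-30, -1)
features = (
    daily
    .withColumn("avg_signins_30d", F.avg("signin_count").over(w))
    .withColumn(
        "signin_spike_ratio",
        F.col("signin_count") / (F.col("avg_signins_30d") + F.lit(1.0)),
    )
)

features.write.mode("overwrite").partitionBy("event_date").parquet(
    "abfss://security@example.dfs.core.windows.net/features/signin_daily/"
)
```

The window-based baseline keeps the pipeline incremental and avoids shuffling the full history for every run; downstream anomaly or graph models would consume the resulting feature table.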
Posted 1 week ago
5.0 - 7.0 years
1 - 2 Lacs
Chennai
On-site
Country/Region: IN
Requisition ID: 26152
Work Model:
Position Type:
Salary Range:
Location: INDIA - CHENNAI - RNTBCI
Title: Lead Data Engineer - AWS
Description: Area(s) of responsibility
Empowered By Innovation
Birlasoft, a global leader at the forefront of Cloud, AI, and Digital technologies, seamlessly blends domain expertise with enterprise solutions. The company’s consultative and design-thinking approach empowers societies worldwide, enhancing the efficiency and productivity of businesses. As part of the multibillion-dollar diversified CKA Birla Group, Birlasoft, with its 12,000+ professionals, is committed to continuing the Group’s 170-year heritage of building sustainable communities.
Role: Lead Data Engineer - AWS
Location: Bangalore / Chennai
Experience: 5 – 7 Years
Job Profile:
Provide estimates for requirements, and analyze and develop as per the requirement.
Develop and maintain data pipelines and ETL (Extract, Transform, Load) processes to extract data efficiently and reliably from various sources, transform it into a usable format, and load it into the appropriate data repositories.
Create and maintain logical and physical data models that align with the organization's data architecture and business needs. This includes defining data schemas, tables, relationships, and indexing strategies for optimal data retrieval and analysis.
Collaborate with cross-functional teams and stakeholders to ensure data security, privacy, and compliance with regulations.
Collaborate with downstream applications to understand their needs, and build and optimize data storage accordingly.
Work closely with other stakeholders and the business to understand data requirements and translate them into technical solutions.
Familiarity with Agile methodologies and prior experience working with Agile teams using Scrum/Kanban.
Lead technical discussions with customers to find the best possible solutions.
Proactively identify and implement opportunities to automate tasks and develop reusable frameworks.
Optimize data pipelines to improve performance and cost, while ensuring a high quality of data within the data lake.
Monitor services and jobs for cost and performance, ensuring continual operation of data pipelines and fixing of defects.
Must Have:
Hands-on expertise of 4-5 years in AWS services such as S3, Lambda, Glue, Athena, RDS, Step Functions, SNS, SQS, API Gateway, security, access and role permissions, and logging and monitoring services.
Good hands-on knowledge of Python, Spark, Hive, Unix, and the AWS CLI.
Prior experience working with streaming solutions like Kafka.
Prior experience implementing different file storage formats such as Delta Lake / Iceberg.
Excellent knowledge of data modeling and designing ETL pipelines.
Strong knowledge of databases such as MySQL and Oracle, and of writing complex queries.
Strong experience working in a continuous integration and deployment process.
PySpark, AWS, SQL, Kafka.
Nice to Have:
Hands-on experience with Terraform, Git, GitHub Actions, CI/CD pipelines, Amazon Q, and AI.
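For the Glue-based ETL work described above, a minimal PySpark Glue job skeleton might look like the following sketch. The catalog database, table, and bucket names are illustrative assumptions.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

# Standard Glue job bootstrap: resolve job arguments and create contexts.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read raw orders from the Glue Data Catalog (hypothetical database/table names).
orders = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db", table_name="orders"
).toDF()

# Basic cleansing and enrichment before loading to the curated zone.
curated = (
    orders
    .dropDuplicates(["order_id"])
    .withColumn("order_date", F.to_date("order_ts"))
    .filter(F.col("order_amount") > 0)
)

# Write partitioned Parquet to the curated S3 bucket (bucket name is an assumption).
curated.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-curated-bucket/orders/"
)

job.commit()
```

In practice a Step Functions state machine or EventBridge schedule would trigger the job, and CloudWatch metrics and alarms would cover the monitoring responsibilities mentioned above.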
Posted 1 week ago
5.0 - 10.0 years
10 - 20 Lacs
Gurugram
Hybrid
Exciting opportunity for an ML Platform Specialist to join a leading technology-driven firm. You will be designing, deploying, and maintaining scalable machine learning infrastructure with a strong focus on Databricks, model lifecycle management, and MLOps practices.
Location: Gurugram (Hybrid)
Your Future Employer
Our client is a leading digital transformation partner driving innovation across industries. With a strong focus on data-driven solutions and cutting-edge technologies, they are committed to fostering a collaborative and growth-focused environment.
Responsibilities
Designing and implementing scalable ML infrastructure on the Databricks Lakehouse
Building CI/CD pipelines and workflows for the machine learning lifecycle
Managing model monitoring, versioning, and registry using MLflow and Databricks
Collaborating with cross-functional teams to optimize machine learning workflows
Driving continuous improvement in MLOps and automation strategies
Requirements
Bachelor's or Master's in Computer Science, ML, Data Engineering, or a related field
3-5 years of experience in MLOps, with strong expertise in Databricks and Azure ML
Proficient in Python, PySpark, MLflow, Delta Lake, and Databricks Feature Store
Hands-on experience with cloud platforms (Azure/AWS/GCP), CI/CD, Git
Knowledge of Terraform, Kubernetes, Azure DevOps, and distributed computing is a plus
What's in it for you
Competitive compensation with performance-driven growth opportunities
Work on cutting-edge MLOps infrastructure and enterprise-scale ML solutions
Collaborative, diverse, and innovation-driven work culture
Continuous learning, upskilling, and career development support
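Since this role centers on model versioning and registry with MLflow on Databricks, a minimal tracking-and-registration sketch could look like the following. The experiment path, metric, and registered model name are assumptions for illustration, not the client's actual setup.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical experiment path; on Databricks this is typically a workspace path.
mlflow.set_experiment("/Shared/churn-model")

X, y = make_classification(n_samples=5_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

with mlflow.start_run(run_name="baseline-logreg"):
    model = LogisticRegression(max_iter=1_000, C=0.5)
    model.fit(X_train, y_train)

    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])

    # Log parameters, metrics, and the model; registering it creates a new
    # version in the model registry (the registry name here is an assumption).
    mlflow.log_param("C", 0.5)
    mlflow.log_metric("test_auc", auc)
    mlflow.sklearn.log_model(
        model, artifact_path="model", registered_model_name="churn_classifier"
    )
```

A CI/CD pipeline would typically promote registry versions between stages (for example staging to production) and drive the monitoring described in the responsibilities.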
Posted 1 week ago
8.0 years
7 - 8 Lacs
Chennai
On-site
Comcast brings together the best in media and technology. We drive innovation to create the world's best entertainment and online experiences. As a Fortune 50 leader, we set the pace in a variety of innovative and fascinating businesses and create career opportunities across a wide range of locations and disciplines. We are at the forefront of change and move at an amazing pace, thanks to our remarkable people, who bring cutting-edge products and services to life for millions of customers every day. If you share in our passion for teamwork, our vision to revolutionize industries and our goal to lead the future in media and technology, we want you to fast-forward your career at Comcast.
Job Summary
Responsible for planning and designing new software and web applications. Analyzes, tests and assists with the integration of new applications. Oversees the documentation of all development activity. Trains non-technical personnel. Assists with tracking performance metrics. Integrates knowledge of business and functional priorities. Acts as a key contributor in a complex and crucial environment. May lead teams or projects and shares expertise.
Job Description
Core Responsibilities
8+ years of development and data engineering experience.
Proficiency in programming languages such as Python, Bash, and PySpark.
Experience with ETL frameworks like Apache Spark, Airflow, or similar tools.
Data pipeline experience with common pipeline and management tools.
Familiarity with data lake architectures and big data technologies (AWS Security Lake, Databricks, Snowflake, Hadoop) in a large, complex deployment.
Strong knowledge of data modeling, SQL and relational databases.
Knowledge of data processing frameworks and data manipulation libraries.
Experience with cloud computing platforms (e.g. AWS).
Direct experience building systems in AWS and using DevOps toolchains including Git, GitHub Actions, Jenkins, CodePipeline, Azure DevOps, etc.
Familiarity with serverless services like AWS Lambda.
Knowledge of microservices architecture and containerization technologies.
Highly collaborative; personally and professionally self-aware; able to and interested in interacting with employees at all levels; embodies integrity; and represents and inspires the highest ethical standards.
A thirst for improvement and an inclination to thoughtfully challenge the status quo.
Desire to try things and iterate on them, fail fast, and focus on functionality that matters.
Eagerness to learn new security tools/services to support broadening our portfolio.
Disclaimer: This information has been designed to indicate the general nature and level of work performed by employees in this role. It is not designed to contain or be interpreted as a comprehensive inventory of all duties, responsibilities and qualifications.
Comcast is proud to be an equal opportunity workplace. We will consider all qualified applicants for employment without regard to race, color, religion, age, sex, sexual orientation, gender identity, national origin, disability, veteran status, genetic information, or any other basis protected by applicable law.
Base pay is one part of the Total Rewards that Comcast provides to compensate and recognize employees for their work. Most sales positions are eligible for a Commission under the terms of an applicable plan, while most non-sales positions are eligible for a Bonus. Additionally, Comcast provides best-in-class Benefits to eligible employees.
We believe that benefits should connect you to the support you need when it matters most, and should help you care for those who matter most. That’s why we provide an array of options, expert guidance and always-on tools that are personalized to meet the needs of your reality – to help support you physically, financially and emotionally through the big milestones and in your everyday life. Please visit the compensation and benefits summary on our careers site for more details.
Education
Bachelor's Degree
While possessing the stated degree is preferred, Comcast also may consider applicants who hold some combination of coursework and experience, or who have extensive related professional experience.
Relevant Work Experience
7-10 Years
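For the Airflow-based pipeline orchestration this posting asks for, a minimal DAG sketch is shown below; the DAG id, schedule, and task bodies are illustrative assumptions rather than anything from the role itself.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Placeholder extract step; in practice this would pull from an API or database.
    print("extracting raw events")


def transform(**context):
    # Placeholder transform step; a real job might submit a Spark application here.
    print("transforming events into curated tables")


def load(**context):
    # Placeholder load step; e.g., copy curated files into a warehouse table.
    print("loading curated data")


with DAG(
    dag_id="events_daily_etl",  # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    tags=["etl", "example"],
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load
```

Keeping each task idempotent and letting the scheduler handle retries is what makes a DAG like this safe to backfill and monitor in production.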
Posted 1 week ago
10.0 years
7 - 10 Lacs
Vadodara
On-site
About Rearc
Founded in 2016, we pride ourselves on fostering an environment where creativity flourishes, bureaucracy is non-existent, and individuals are encouraged to challenge the status quo. We're not just a company; we're a community of problem-solvers dedicated to improving the lives of fellow software engineers. Our commitment is simple - finding the right fit for our team and cultivating a desire to make things better. If you're a cloud professional intrigued by our problem space and eager to make a difference, you've come to the right place. Join us, and let's solve problems together!
As a Lead Data Engineer at Rearc, you'll play a pivotal role in establishing and maintaining technical excellence within our data engineering team. Your deep expertise in data architecture, ETL processes, and data modeling will be instrumental in optimizing data workflows for efficiency, scalability, and reliability. You'll collaborate closely with cross-functional teams to design and implement robust data solutions that meet business objectives and adhere to best practices in data management. Building strong partnerships with both technical teams and stakeholders will be essential as you drive data-driven initiatives and ensure their successful implementation.
What You Bring
With 10+ years of experience in data engineering, data architecture, or related fields, you offer a wealth of expertise in managing and optimizing data pipelines and architectures.
Extensive experience in writing and testing Java and/or Python.
Proven experience with data pipeline orchestration using platforms such as Airflow, Databricks, DBT or AWS Glue.
Hands-on experience with data analysis tools and libraries like PySpark, NumPy, Pandas, or Dask. Proficiency with Spark and Databricks is highly desirable.
You have a proven track record of leading complex data engineering projects, including designing and implementing scalable data solutions.
Your hands-on experience with ETL processes, data warehousing, and data modeling tools allows you to deliver efficient and robust data pipelines.
You possess in-depth knowledge of data integration tools and best practices.
You have a strong understanding of cloud-based data services and technologies (e.g., AWS Redshift, Azure Synapse Analytics, Google BigQuery).
You bring strong strategic and analytical skills to the role, enabling you to solve intricate data challenges and drive data-driven decision-making.
Proven proficiency in implementing and optimizing data pipelines using modern tools and frameworks, including Databricks for data processing and Delta Lake for managing large-scale data lakes.
Your exceptional communication and interpersonal skills facilitate collaboration with cross-functional teams and effective stakeholder engagement at all levels.
What You’ll Do
As a Lead Data Engineer at Rearc, your role is pivotal in driving the success of our data engineering initiatives. You will lead by example, fostering trust and accountability within your team while leveraging your technical expertise to optimize data processes and deliver exceptional data solutions. Here's what you'll be doing:
Understand Requirements and Challenges: Collaborate with stakeholders to deeply understand their data requirements and challenges, enabling the development of robust data solutions tailored to the needs of our clients.
Implement with a DataOps Mindset: Embrace a DataOps mindset and utilize modern data engineering tools and frameworks, such as Apache Airflow, Apache Spark, or similar, to build scalable and efficient data pipelines and architectures.
Lead Data Engineering Projects: Take the lead in managing and executing data engineering projects, providing technical guidance and oversight to ensure successful project delivery.
Mentor Data Engineers: Share your extensive knowledge and experience in data engineering with junior team members, guiding and mentoring them to foster their growth and development in the field.
Promote Knowledge Sharing: Contribute to our knowledge base by writing technical blogs and articles, promoting best practices in data engineering, and contributing to a culture of continuous learning and innovation.
At Rearc, we're committed to empowering engineers to build awesome products and experiences. Success as a business hinges on our people's ability to think freely, challenge the status quo, and speak up about alternative problem-solving approaches. If you're an engineer driven by the desire to solve problems and make a difference, you're in the right place! Our approach is simple: empower engineers with the best tools possible to make an impact within their industry. We're on the lookout for engineers who thrive on ownership and freedom, possessing not just technical prowess, but also exceptional leadership skills. Our ideal candidates are hands-on-keyboard leaders who don't just talk the talk but also walk the walk, designing and building solutions that push the boundaries of cloud computing.
Posted 1 week ago
8.0 years
7 - 10 Lacs
Vadodara
On-site
At Rearc, we're committed to empowering engineers to build awesome products and experiences. Success as a business hinges on our people's ability to think freely, challenge the status quo, and speak up about alternative problem-solving approaches. If you're an engineer driven by the desire to solve problems and make a difference, you're in the right place! Our approach is simple: empower engineers with the best tools possible to make an impact within their industry. We're on the lookout for engineers who thrive on ownership and freedom, possessing not just technical prowess, but also exceptional leadership skills. Our ideal candidates are hands-on leaders who don't just talk the talk but also walk the walk, designing and building solutions that push the boundaries of cloud computing.
As a Senior Data Engineer at Rearc, you will be at the forefront of driving technical excellence within our data engineering team. Your expertise in data architecture, cloud-native solutions, and modern data processing frameworks will be essential in designing workflows that are optimized for efficiency, scalability, and reliability. You'll leverage tools like Databricks, PySpark, and Delta Lake to deliver cutting-edge data solutions that align with business objectives. Collaborating with cross-functional teams, you will design and implement scalable architectures while adhering to best practices in data management and governance. Building strong relationships with both technical teams and stakeholders will be crucial as you lead data-driven initiatives and ensure their seamless execution.
What You Bring
8+ years of experience in data engineering, showcasing expertise in diverse architectures, technology stacks, and use cases.
Strong expertise in designing and implementing data warehouse and data lake architectures, particularly in AWS environments.
Extensive experience with Python for data engineering tasks, including familiarity with libraries and frameworks commonly used in Python-based data engineering workflows.
Proven experience with data pipeline orchestration using platforms such as Airflow, Databricks, DBT or AWS Glue.
Hands-on experience with data analysis tools and libraries like PySpark, NumPy, Pandas, or Dask. Proficiency with Spark and Databricks is highly desirable.
Experience with SQL and NoSQL databases, including PostgreSQL, Amazon Redshift, Delta Lake, Iceberg and DynamoDB.
In-depth knowledge of data architecture principles and best practices, especially in cloud environments.
Proven experience with AWS services, including expertise in using the AWS CLI, SDKs, and Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, or AWS CDK.
Exceptional communication skills, capable of clearly articulating complex technical concepts to both technical and non-technical stakeholders.
Demonstrated ability to quickly adapt to new tasks and roles in a dynamic environment.
What You'll Do
Strategic Data Engineering Leadership: Provide strategic vision and technical leadership in data engineering, guiding the development and execution of advanced data strategies that align with business objectives.
Architect Data Solutions: Design and architect complex data pipelines and scalable architectures, leveraging advanced tools and frameworks (e.g., Apache Kafka, Kubernetes) to ensure optimal performance and reliability.
Drive Innovation: Lead the exploration and adoption of new technologies and methodologies in data engineering, driving innovation and continuous improvement across data processes.
Technical Expertise: Apply deep expertise in ETL processes, data modeling, and data warehousing to optimize data workflows and ensure data integrity and quality.
Collaboration and Mentorship: Collaborate closely with cross-functional teams to understand requirements and deliver impactful data solutions. Mentor and coach junior team members, fostering their growth and development in data engineering practices.
Thought Leadership: Contribute to thought leadership in the data engineering domain through technical articles, conference presentations, and participation in industry forums.
Some More About Us
Founded in 2016, we pride ourselves on fostering an environment where creativity flourishes, bureaucracy is non-existent, and individuals are encouraged to challenge the status quo. We're not just a company; we're a community of problem-solvers dedicated to improving the lives of fellow software engineers. Our commitment is simple - finding the right fit for our team and cultivating a desire to make things better. If you're a cloud professional intrigued by our problem space and eager to make a difference, you've come to the right place. Join us, and let's solve problems together!
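Given the emphasis above on Databricks, PySpark, and Delta Lake, a minimal upsert (MERGE) sketch into a Delta table might look like the following. The table paths and join key are assumptions, and the code presumes a Delta-enabled Spark environment such as Databricks.

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("customer-upsert").getOrCreate()

# Incoming change records (path is a hypothetical landing zone).
updates = spark.read.parquet("s3://example-landing/customers/2024-06-01/")

target_path = "s3://example-lake/curated/customers"  # hypothetical Delta table path

if DeltaTable.isDeltaTable(spark, target_path):
    target = DeltaTable.forPath(spark, target_path)
    # Upsert: update existing customers, insert new ones, keyed on customer_id.
    (
        target.alias("t")
        .merge(updates.alias("u"), "t.customer_id = u.customer_id")
        .whenMatchedUpdateAll()
        .whenNotMatchedInsertAll()
        .execute()
    )
else:
    # First load: create the Delta table from the initial batch.
    updates.write.format("delta").mode("overwrite").save(target_path)
```

A MERGE like this keeps the curated table consistent under repeated runs, which is what makes Delta-based pipelines easier to schedule and backfill than plain file overwrites.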
Posted 1 week ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Primary Roles And Responsibilities
Developing Modern Data Warehouse solutions using Snowflake, Databricks and ADF.
Ability to provide solutions that are forward-thinking in the data engineering and analytics space.
Collaborate with DW/BI leads to understand new ETL pipeline development requirements.
Triage issues to find gaps in existing pipelines and fix them.
Work with the business to understand reporting-layer needs and develop a data model to fulfill those needs.
Help junior team members resolve issues and technical challenges.
Drive technical discussions with the client architect and team members.
Orchestrate the data pipelines in the scheduler via Airflow.
Skills And Qualifications
Skills: SQL, PL/SQL, Spark, star and snowflake dimensional modeling, Databricks, Snowsight, Terraform, Git, Unix shell scripting, SnowSQL, Cassandra, CircleCI, Azure, PySpark, Snowpipe, MongoDB, Neo4j, Azure Data Factory, Snowflake, Python.
Bachelor's and/or master's degree in Computer Science or equivalent experience.
Must have 6+ years of total IT experience and 3+ years of experience in data warehouse/ETL projects.
Expertise in Snowflake security, Snowflake SQL and designing/implementing other Snowflake objects.
Hands-on experience with Snowflake utilities, SnowSQL, Snowpipe, Snowsight and Snowflake connectors.
Deep understanding of star and snowflake dimensional modeling.
Strong knowledge of data management principles.
Good understanding of the Databricks Data & AI platform and Databricks Delta Lake architecture.
Should have hands-on experience in SQL and Spark (PySpark).
Experience in building ETL / data warehouse transformation processes.
Experience with open-source non-relational / NoSQL data repositories (incl. MongoDB, Cassandra, Neo4j).
Experience working with structured and unstructured data, including imaging & geospatial data.
Experience working in a DevOps environment with tools such as Terraform, CircleCI, Git.
Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning, troubleshooting and query optimization.
Databricks Certified Data Engineer Associate/Professional certification (desirable).
Comfortable working in a dynamic, fast-paced, innovative environment with several ongoing concurrent projects.
Should have experience working in Agile methodology.
Strong verbal and written communication skills.
Strong analytical and problem-solving skills with a high attention to detail.
Mandatory Skills: Snowflake / Azure Data Factory / PySpark / Databricks
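For the Snowflake loading and transformation side of this stack, a minimal sketch using the Snowflake Python connector to stage a file and copy it into a table could look like this. The account, warehouse, and table names are placeholders, and credentials would normally come from a secrets manager rather than being hard-coded.

```python
import snowflake.connector

# Placeholder connection details; real values would come from a vault/secrets manager.
conn = snowflake.connector.connect(
    account="xy12345.ap-south-1",
    user="ETL_USER",
    password="********",
    warehouse="TRANSFORM_WH",
    database="ANALYTICS",
    schema="STAGING",
)

try:
    cur = conn.cursor()

    # Upload a local file to the table stage, then COPY it into the staging table.
    cur.execute("PUT file:///tmp/orders_2024_06_01.csv @%ORDERS_STG AUTO_COMPRESS=TRUE")
    cur.execute(
        """
        COPY INTO ORDERS_STG
        FROM @%ORDERS_STG
        FILE_FORMAT = (TYPE = CSV FIELD_OPTIONALLY_ENCLOSED_BY = '"' SKIP_HEADER = 1)
        """
    )

    # Simple transformation into a curated table.
    cur.execute(
        """
        INSERT INTO ANALYTICS.CURATED.ORDERS
        SELECT ORDER_ID, CUSTOMER_ID, TO_DATE(ORDER_TS) AS ORDER_DATE, ORDER_AMOUNT
        FROM ORDERS_STG
        WHERE ORDER_AMOUNT > 0
        """
    )
finally:
    conn.close()
```

In a production setup the same COPY pattern is usually automated with Snowpipe, and Airflow (mentioned in the responsibilities) handles scheduling and dependency management.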
Posted 1 week ago
2.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Overview: TekWissen is a global workforce management provider throughout India and many other countries in the world. The below client is a global company with shared ideals and a deep sense of family. From our earliest days as a pioneer of modern transportation, we have sought to make the world a better place – one that benefits lives, communities and the planet.
Job Title: GCP Data Engineer
Location: Chennai
Duration: 12 Months
Work Type: Onsite
Position Description:
Bachelor's Degree
2+ years in GCP services - BigQuery, Dataflow, Dataproc, Dataplex, Data Fusion, Terraform, Tekton, Cloud SQL, Memorystore (Redis), Airflow, Cloud Storage
2+ years in data transfer utilities
2+ years in Git or any other version control tool
2+ years in Confluent Kafka
1+ years of experience in API development
2+ years in an Agile framework
4+ years of strong experience in Python and PySpark development
4+ years of shell scripting to develop ad hoc jobs for data import/export
Skills Required: Python, Dataflow, Dataproc, GCP Cloud Run, Dataform, Agile software development, BigQuery, Terraform, Data Fusion, Cloud SQL, GCP, Kafka
Skills Preferred: Java
Experience Required: 8+ years
Education Required: Bachelor's Degree
TekWissen® Group is an equal opportunity employer supporting workforce diversity.
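For the BigQuery and PySpark skills listed here, a minimal Dataproc-style sketch that reads from and writes to BigQuery with the spark-bigquery connector could look like the following. The project, dataset, table, and bucket names are assumptions, and the connector package is assumed to be available on the cluster.

```python
from pyspark.sql import SparkSession, functions as F

# On Dataproc the BigQuery connector is typically supplied via the
# spark-bigquery-with-dependencies package (an environment assumption here).
spark = SparkSession.builder.appName("bq-daily-aggregate").getOrCreate()

# Read source events from a BigQuery table (hypothetical project/dataset/table).
events = (
    spark.read.format("bigquery")
    .option("table", "example-project.raw.events")
    .load()
)

daily = (
    events
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("event_date", "event_type")
    .agg(F.count("*").alias("event_count"))
)

# Write results back to BigQuery; the connector needs a GCS temporary bucket.
(
    daily.write.format("bigquery")
    .option("table", "example-project.curated.daily_event_counts")
    .option("temporaryGcsBucket", "example-temp-bucket")
    .mode("overwrite")
    .save()
)
```

An Airflow DAG (Cloud Composer) would normally schedule a job like this and chain it with downstream Dataform or BigQuery transformations.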
Posted 1 week ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Required Skills and Experience:
3+ years of experience building enterprise-scale, cloud-native applications in AWS.
Strong hands-on experience with AWS Glue, PySpark, Lambda, S3, Athena, and RDS (Aurora/Postgres).
Proven experience in building secure REST APIs, including token-based security, API Gateway configuration, and monitoring.
Experience working in event-driven and domain-driven architectures.
Deep knowledge of SQL Server and experience translating complex stored procedures to AWS Glue.
Expertise in designing modular, fault-tolerant Glue workflows using Step Functions.
Proficient in Python and developing reusable code libraries for data processing tasks.
Strong understanding of AWS security services (KMS, tokenization, IAM policies).
Experience with CloudWatch, monitoring, alerting, and log tracing.
Hands-on experience with Terraform for managing infrastructure as code.
Experience with API documentation and lifecycle management.
Location: Chennai
Experience: 4 to 10 years
Notice period: immediate to 15 days
Work mode: Hyd, 4 days work-from-office per week
Shift: 1 PM to 10 PM
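Since the role combines Lambda, Athena, and S3, a minimal boto3 sketch of a Lambda handler that starts an Athena query and returns its execution id might look like this. The database, query, and output locations are placeholders, not details from the posting.

```python
import boto3

athena = boto3.client("athena")

# Placeholder query and locations; real values would come from configuration.
QUERY = """
SELECT order_date, COUNT(*) AS orders
FROM curated_db.orders
WHERE order_date >= date_add('day', -7, current_date)
GROUP BY order_date
"""
OUTPUT_LOCATION = "s3://example-athena-results/reports/"


def lambda_handler(event, context):
    """Start a weekly-orders Athena query and return its execution id."""
    response = athena.start_query_execution(
        QueryString=QUERY,
        QueryExecutionContext={"Database": "curated_db"},
        ResultConfiguration={"OutputLocation": OUTPUT_LOCATION},
    )
    query_id = response["QueryExecutionId"]

    # Callers (e.g., a Step Functions state machine) can poll
    # get_query_execution(QueryExecutionId=query_id) for completion.
    return {"statusCode": 200, "queryExecutionId": query_id}
```

Wiring the handler into a Step Functions workflow alongside Glue jobs is the usual way to get the modular, fault-tolerant orchestration the posting describes.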
Posted 1 week ago
5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Who We Are
Bain & Company is a global management consulting firm that helps the world’s most ambitious change-makers define the future. Across 65 offices in 40 countries, we work alongside our clients as one team with a shared ambition to achieve extraordinary results, outperform the competition and redefine industries. Since our founding in 1973, we have measured our success by the success of our clients, and we proudly maintain the highest level of client advocacy in the industry.
Who You’ll Work With
BCN Labs is a Center of Excellence (CoE) functioning akin to a small R&D boutique startup within the Bain ecosystem, delivering end-to-end, data-driven, client-deployable solutions across a wide variety of sectors and industries. We work directly with other CoEs and Practices within Bain as part of the Expert Client Delivery system and interface with teams across the globe. We are first and foremost business thought partners, working on intelligent ways of using analytical techniques and algorithms across the spectrum of disciplines that can enable building world-class solutions. Our goal is to build a disruptive, high-impact, business-enabled, end-to-end analytical solutions delivery system across all verticals of Bain.
What You Will Do
We’re seeking a Project Leader who is a self-starter and brings a unique mix of data engineering expertise and analytical problem-solving ability to play a key role in the delivery of cutting-edge analytical solutions at BCN Labs. This role sits at the intersection of robust data platform engineering, software development and client-oriented delivery, requiring both hands-on implementation skills and a knack for strategic thinking to solve real-world business problems. As a PL, you will drive the end-to-end data pipeline lifecycle – from designing robust architectures to deploying production-grade analytical solutions. You’ll also work closely with analysts, data scientists and business stakeholders to frame problems, validate solutions, and lead teams in client delivery.
A PL will be responsible for:
Architect and Deliver Scalable Data Pipelines: Build, optimize, and maintain batch and streaming pipelines using modern data engineering tools and frameworks (e.g., PySpark, Airflow, Snowflake).
End-to-End Project Leadership: Own the full delivery cycle, from data ingestion and transformation to application deployment and monitoring in the cloud.
Analytical Framing: Work with project teams to understand business needs and help shape technical solutions that are analytically sound and measurable in terms of business value.
Mentorship and Team Leadership: Lead a team of engineers and analysts, providing technical guidance, code reviews, and project oversight to ensure quality and impact. Help build the next layer of people with full-stack capabilities at BCN Labs.
Hands-on Development: Write and review clean, modular, production-ready code. Ensure scalability, reusability, and maintainability of solutions.
Client & Stakeholder Engagement: Communicate complex technical concepts and insights clearly and persuasively to non-technical audiences, both internally and externally.
Data Infrastructure Innovation: Contribute to internal tooling, frameworks, and automation efforts to accelerate the Labs’ data engineering capabilities.
Collaborate on Analytical Solutions: Work with data scientists by enabling high-performance, well-governed data environments and workflows.
About You
Education & Experience:
Bachelor’s or Master’s degree in Computer Science, Information Technology, Engineering, or a related field.
5+ years (Master's + 3+ years) of proven experience in data engineering, software development, and building scalable data pipelines in a production environment.
Demonstrated expertise in driving end-to-end analytical solution delivery, from data ingestion and transformation to cloud deployment and performance optimization.
You will fit into our team-oriented structure with a college/hostel-style way of working, comfortable reaching out to anyone for support that helps us serve our clients better.
Core Technical Skills:
Expertise in Python, with solid experience in writing efficient, maintainable, and testable code for data pipelines and services.
Strong skills in SQL (and NoSQL databases) for data transformation, analysis, and performance tuning.
Proficiency with HTML, CSS, JavaScript, and AJAX to build data-driven UIs or web apps.
Experience in developing, integrating and consuming RESTful APIs and working with microservices architecture.
Frameworks & Platforms:
Hands-on experience with Python-based frameworks like FastAPI, Django and/or Streamlit for building data apps or APIs.
Solid frontend development skills, with experience using modern JavaScript frameworks such as React and/or Vue.js to build interactive, data-driven UIs and web apps.
Familiarity with Docker, Git, CI/CD pipelines, and modern software delivery practices.
Experience deploying and managing data solutions on AWS or Azure (e.g., Lambda, EC2, S3, Data Factory).
Strong preference for candidates with real-world experience in Apache Airflow, PySpark, and Snowflake.
Knowledge of container orchestration (e.g., Kubernetes) is a plus.
What makes us a great place to work
We are proud to be consistently recognized as one of the world's best places to work, a champion of diversity and a model of social responsibility. We are currently ranked the #1 consulting firm on Glassdoor’s Best Places to Work list, and we have maintained a spot in the top four on Glassdoor's list for the last 12 years. We believe that diversity, inclusion and collaboration are key to building extraordinary teams. We hire people with exceptional talents, abilities and potential, then create an environment where you can become the best version of yourself and thrive both professionally and personally. We are publicly recognized by external parties such as Fortune, Vault, Mogul, Working Mother, Glassdoor and the Human Rights Campaign for being a great place to work for diversity and inclusion, women, LGBTQ and parents.
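For the Python API frameworks this role lists (FastAPI for data apps and services), a minimal sketch of a data-serving endpoint could look like the following. The route, model, and in-memory data are illustrative assumptions; a real service would query a warehouse or cache instead.

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="metrics-api")

# Stand-in for a real data store; in practice this would query a warehouse or cache.
_REVENUE_BY_REGION = {"north": 1_250_000.0, "south": 980_000.0, "west": 1_410_000.0}


class RegionMetric(BaseModel):
    region: str
    revenue: float


@app.get("/metrics/revenue/{region}", response_model=RegionMetric)
def get_region_revenue(region: str) -> RegionMetric:
    """Return revenue for a single region, or 404 if the region is unknown."""
    key = region.lower()
    if key not in _REVENUE_BY_REGION:
        raise HTTPException(status_code=404, detail=f"unknown region: {region}")
    return RegionMetric(region=key, revenue=_REVENUE_BY_REGION[key])

# Run locally with: uvicorn app:app --reload   (assuming this file is app.py)
```

Typed response models like this keep the API self-documenting (FastAPI generates the OpenAPI schema automatically), which pairs naturally with the CI/CD and Docker practices mentioned above.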
Posted 1 week ago
3.0 years
0 Lacs
Andhra Pradesh
On-site
We are looking for a PySpark solutions developer and data engineer who can design and build solutions for one of our Fortune 500 client programs, which aims at building data standardization and curation capabilities on a Hadoop cluster. This high-visibility, fast-paced key initiative will integrate data across internal and external sources, provide analytical insights, and integrate with the customer's critical systems.
Key Responsibilities
Ability to design, build and unit test applications on the Spark framework in Python.
Build PySpark-based applications for both batch and streaming requirements, which will require in-depth knowledge of the majority of Hadoop and NoSQL databases as well.
Develop and execute data pipeline testing processes and validate business rules and policies.
Optimize performance of the built Spark applications in Hadoop using configurations around Spark Context, Spark-SQL, DataFrames, and Pair RDDs.
Optimize performance for data access requirements by choosing the appropriate native Hadoop file formats (Avro, Parquet, ORC, etc.) and compression codecs respectively.
Build integrated solutions leveraging Unix shell scripting, RDBMS, Hive, the HDFS file system, HDFS file types, and HDFS compression codecs.
Build data tokenization libraries and integrate with Hive & Spark for column-level obfuscation.
Experience in processing large amounts of structured and unstructured data, including integrating data from multiple sources.
Create and maintain integration and regression testing frameworks on Jenkins integrated with Bitbucket and/or Git repositories.
Participate in the agile development process, and document and communicate issues and bugs relative to data standards in scrum meetings.
Work collaboratively with onsite and offshore teams.
Develop and review technical documentation for artifacts delivered.
Ability to solve complex data-driven scenarios and triage towards defects and production issues.
Ability to learn-unlearn-relearn concepts with an open and analytical mindset.
Participate in code release and production deployment.
Challenge and inspire team members to achieve business results in a fast-paced and quickly changing environment.
Preferred Qualifications
BE/B.Tech/B.Sc. in Computer Science/Statistics from an accredited college or university.
Minimum 3 years of extensive experience in design, build and deployment of PySpark-based applications.
Expertise in handling complex, large-scale Big Data environments, preferably 20 TB+.
Minimum 3 years of experience in the following: Hive, YARN, HDFS.
Hands-on experience writing complex SQL queries, and exporting and importing large amounts of data using utilities.
Ability to build abstracted, modularized reusable code components.
Prior experience with ETL tools, preferably Informatica PowerCenter, is advantageous.
Able to quickly adapt and learn.
Able to jump into an ambiguous situation and take the lead on resolution.
Able to communicate and coordinate across various teams.
Comfortable tackling new challenges and new ways of working.
Ready to move away from traditional methods and adapt to agile ones.
Comfortable challenging your peers and leadership team.
Can prove yourself quickly and decisively.
Excellent communication skills and good customer centricity.
Strong target and solution orientation.
About Virtusa
Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody.
When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities and work with state of the art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
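The responsibilities above include choosing native Hadoop file formats and compression codecs for performance; a small PySpark sketch of that kind of tuning is shown below, with database, table, and path names as assumptions.

```python
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder
    .appName("orders-curation")
    .enableHiveSupport()          # assumes a Hive metastore is available
    .getOrCreate()
)

# Prefer columnar formats with splittable compression for analytical reads.
spark.conf.set("spark.sql.parquet.compression.codec", "snappy")
spark.conf.set("spark.sql.orc.compression.codec", "zlib")

# Read a raw Hive table (hypothetical database/table names).
raw = spark.table("raw_db.orders")

curated = (
    raw
    .filter(F.col("order_status") == "COMPLETED")
    .withColumn("order_date", F.to_date("order_ts"))
    .repartition("order_date")   # align file layout with the partition column
)

# Write partitioned Parquet to HDFS for downstream Hive/Spark consumers.
(
    curated.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("hdfs:///data/curated/orders")
)
```

Parquet or ORC with a splittable codec usually gives far better scan performance than row-oriented formats, which is the trade-off the posting's "choose the appropriate file format and codec" responsibility is pointing at.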
Posted 1 week ago
3.0 - 5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job description
Location: Mumbai / Bengaluru
Experience: 3-5 years
Industry: Banking / Financial Services (Mandatory)
Why would you like to join us?
TransOrg Analytics specializes in Data Science, Data Engineering and Generative AI, providing advanced analytics solutions to industry leaders and Fortune 500 companies across India, US, APAC and the Middle East. We leverage data science to streamline, optimize, and accelerate our clients' businesses. Visit www.transorg.com to know more about us.
What do we expect from you?
Build and validate credit risk models, including application scorecards and behavior scorecards (B-score), aligned with business and regulatory requirements.
Use advanced machine learning algorithms such as Logistic Regression, XGBoost, and Clustering to develop interpretable and high-performance models.
Translate business problems into data-driven solutions using robust statistical and analytical methods.
Collaborate with cross-functional teams including credit policy, risk strategy, and data engineering to ensure effective model implementation and monitoring.
Maintain clear, audit-ready documentation for all models and comply with internal model governance standards.
Track and monitor model performance, proactively suggesting recalibrations or enhancements as needed.
What do you need to excel at?
Writing efficient and scalable code in Python, SQL, and PySpark for data processing, feature engineering, and model training.
Working with large-scale structured and unstructured data in a fast-paced banking or fintech environment.
Deploying and managing models using MLflow, with a strong understanding of version control and model lifecycle management.
Understanding retail banking products, especially credit card portfolios, customer behavior, and risk segmentation.
Communicating complex technical outcomes clearly to non-technical stakeholders and senior management.
Applying a structured problem-solving approach and delivering insights that drive business value.
What are we looking for?
Bachelor's or master's degree in Statistics, Mathematics, Computer Science, or a related quantitative field.
3-5 years of experience in credit risk modelling, preferably in retail banking or credit cards.
Hands-on expertise in Python, SQL, PySpark, and experience with MLflow or equivalent MLOps tools.
Deep understanding of machine learning techniques including Logistic Regression, XGBoost, and Clustering.
Proven experience in developing application scorecards and behavior scorecards using real-world banking data.
Strong documentation and compliance orientation, with an ability to work within regulatory frameworks.
Curiosity, accountability, and a passion for solving real-world problems using data.
Cloud knowledge, JIRA, GitHub (good to have)
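As a rough illustration of the application-scorecard work described above, here is a hedged sketch that fits a logistic regression on synthetic data and converts predicted odds into a points-based score using a standard points-to-double-odds scaling. The feature names, synthetic data, and scaling constants are assumptions for illustration, not the firm's actual methodology.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical application data with a binary default flag.
rng = np.random.default_rng(7)
n = 10_000
df = pd.DataFrame({
    "utilization": rng.uniform(0, 1, n),
    "inquiries_6m": rng.poisson(1.5, n),
    "months_on_book": rng.integers(1, 120, n),
})
logit = -2.0 + 3.0 * df["utilization"] + 0.3 * df["inquiries_6m"] - 0.01 * df["months_on_book"]
df["default_flag"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = df[["utilization", "inquiries_6m", "months_on_book"]]
y = df["default_flag"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

model = LogisticRegression(max_iter=1_000)
model.fit(X_train, y_train)
print("test AUC:", round(roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]), 3))

# Scorecard scaling: 600 points at 50:1 good/bad odds, 20 points to double the odds.
PDO, BASE_SCORE, BASE_ODDS = 20, 600, 50
factor = PDO / np.log(2)
offset = BASE_SCORE - factor * np.log(BASE_ODDS)

p_bad = model.predict_proba(X_test)[:, 1]
odds_good = (1 - p_bad) / p_bad
scores = offset + factor * np.log(odds_good)
print("score range:", int(scores.min()), "-", int(scores.max()))
```

In production, the fitted model, its parameters, and validation metrics would be logged to MLflow (as the role requires) so that recalibrations and governance reviews can trace every scorecard version.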
Posted 1 week ago
PySpark, a powerful data processing framework built on top of Apache Spark and Python, is in high demand in the job market in India. With the increasing need for big data processing and analysis, companies are actively seeking professionals with PySpark skills to join their teams. If you are a job seeker looking to excel in the field of big data and analytics, exploring PySpark jobs in India could be a great career move.
Here are 5 major cities in India where companies are actively hiring for PySpark roles:
1. Bangalore
2. Pune
3. Hyderabad
4. Mumbai
5. Delhi
The estimated salary range for PySpark professionals in India varies based on experience levels. Entry-level positions can expect to earn around INR 6-8 lakhs per annum, while experienced professionals can earn upwards of INR 15 lakhs per annum.
In the field of PySpark, a typical career progression may look like this:
1. Junior Developer
2. Data Engineer
3. Senior Developer
4. Tech Lead
5. Data Architect
In addition to PySpark, professionals in this field are often expected to have or develop skills in:
- Python programming
- Apache Spark
- Big data technologies (Hadoop, Hive, etc.)
- SQL
- Data visualization tools (Tableau, Power BI)
As you explore PySpark jobs in India, remember to prepare thoroughly for interviews and showcase your expertise confidently. With the right skills and knowledge, you can excel in this field and advance your career in the world of big data and analytics. Good luck!