15.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
dunnhumby is the global leader in Customer Data Science, empowering businesses everywhere to compete and thrive in the modern data-driven economy. We always put the Customer First. Our mission: to enable businesses to grow and reimagine themselves by becoming advocates and champions for their Customers. With deep heritage and expertise in retail – one of the world’s most competitive markets, with a deluge of multi-dimensional data – dunnhumby today enables businesses all over the world, across industries, to be Customer First. dunnhumby employs nearly 2,500 experts in offices throughout Europe, Asia, Africa, and the Americas, working for transformative, iconic brands such as Tesco, Coca-Cola, Meijer, Procter & Gamble and Metro.

We’re looking for a Senior Engineering Manager who expects more from their career. It’s a chance to extend and improve dunnhumby’s Software Engineering Department, and an opportunity to work with a market-leading business to explore new opportunities for us and influence global retailers. As a Senior Engineering Manager, you will be responsible for leading and inspiring multiple engineering teams to deliver high-quality, innovative software products that drive business growth. You will set the technical direction, build high-performing teams, and foster a culture of engineering excellence.

Required Skills
15+ years of experience in software engineering, with at least 3+ years leading global teams.
Proven experience as a Senior Engineering Manager, Lead Engineer, or Technical Manager, managing complex engineering projects.
Strong expertise in distributed systems, cloud architecture (GCP and Azure), microservices, API design, and scalable platform engineering.
In-depth knowledge and hands-on experience with .NET, Python, Spark, Git (GitLab), Docker, Kubernetes, and cloud development (GCP and Azure).
Experience working with JavaScript (React, Angular, etc.).
Strong knowledge of DevOps, CI/CD pipelines, observability, and cloud security best practices.
Ability to drive engineering strategy, process improvements, and high-velocity agile execution.
Experience hiring, mentoring, and leading global teams across multiple time zones.
Excellent stakeholder management, communication, and decision-making skills, working cross-functionally with PMs, UX, and business leaders.
Passion for continuous learning, innovation, and staying ahead of technology trends.

What You Can Expect From Us
We won’t just meet your expectations. We’ll defy them. So you’ll enjoy the comprehensive rewards package you’d expect from a leading technology company, but also a degree of personal flexibility you might not expect, plus thoughtful perks like flexible working hours and your birthday off. You’ll also benefit from an investment in cutting-edge technology that reflects our global ambition, but with a nimble, small-business feel that gives you the freedom to play, experiment and learn. And we don’t just talk about diversity and inclusion. We live it every day – with thriving networks including dh Gender Equality Network, dh Proud, dh Family, dh One, dh Enabled and dh Thrive as the living proof. We want everyone to have the opportunity to shine and perform at their best throughout our recruitment process. Please let us know how we can make this process work best for you.
Our Approach to Flexible Working
At dunnhumby, we value and respect difference and are committed to building an inclusive culture by creating an environment where you can balance a successful career with your commitments and interests outside of work. We believe that you will do your best at work if you have a work/life balance. Some roles lend themselves to flexible options more than others, so if this is important to you, please raise it with your recruiter, as we are open to discussing agile working opportunities during the hiring process. For further information about how we collect and use your personal information, please see our Privacy Notice.
Posted 2 days ago
10.0 years
0 Lacs
Greater Kolkata Area
On-site
Join our Team

About this opportunity:
We are seeking a highly skilled, hands-on AI Architect - GenAI to lead the design and implementation of production-grade, cloud-native AI and NLP solutions that drive business value and enhance decision-making processes. The ideal candidate will have a robust background in machine learning, generative AI, and the architecture of scalable production systems. As an AI Architect, you will play a key role in shaping the direction of advanced AI technologies and leading teams in the development of cutting-edge solutions.

What you will do:
Architect and design AI and NLP solutions to address complex business challenges and support strategic decision-making.
Lead the design and development of scalable machine learning models and applications using Python, Spark, NoSQL databases, and other advanced technologies.
Spearhead the integration of Generative AI techniques into production systems to deliver innovative solutions such as chatbots, automated document generation, and workflow optimization (see the sketch after this listing).
Guide teams in conducting comprehensive data analysis and exploration to extract actionable insights from large datasets, ensuring these findings are communicated effectively to stakeholders.
Collaborate with cross-functional teams, including software engineers and data engineers, to integrate AI models into production environments, ensuring scalability, reliability, and performance.
Stay at the forefront of advancements in AI, NLP, and Generative AI, incorporating emerging methodologies into existing models and developing new algorithms to solve complex challenges.
Provide thought leadership on best practices for AI model architecture, deployment, and continuous optimization.
Ensure that AI solutions are built with scalability, reliability, and compliance in mind.

The skills you bring:
10+ years of experience in AI, machine learning, or a similar role, with a proven track record of delivering AI-driven solutions.
Hands-on experience designing and implementing end-to-end GenAI-based solutions, particularly chatbots, document generation, workflow automation, and other generative use cases.
Expertise in Python programming and extensive experience with AI frameworks and libraries such as TensorFlow, PyTorch, scikit-learn, and vector databases.
Deep understanding of and experience with distributed data processing using Spark.
Proven experience architecting, deploying, and optimizing machine learning models in production environments at scale.
Expertise in working with Generative AI models, both proprietary and open-source (e.g., GPT-4, Mistral, Code Llama, StarCoder), and applying them to real-world use cases.
Expertise in designing cloud-native architectures and microservices for AI/ML applications.

Why join Ericsson?
At Ericsson, you’ll have an outstanding opportunity: the chance to use your skills and imagination to push the boundaries of what’s possible, and to build never-before-seen solutions to some of the world’s toughest problems. You’ll be challenged, but you won’t be alone. You’ll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next.

What happens once you apply?
Click Here to find all you need to know about what our typical hiring process looks like.

Encouraging a diverse and inclusive organization is core to our values at Ericsson, which is why we champion it in everything we do. We truly believe that collaborating with people with different experiences drives innovation, which is essential for our future growth.
We encourage people from all backgrounds to apply and realize their full potential as part of our Ericsson team. Ericsson is proud to be an Equal Opportunity Employer. Learn more.

Primary country and city: India (IN) || Kolkata
Req ID: 763161
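As a flavor of the generative use cases this role covers (chatbots, automated document generation), here is a minimal sketch of serving an open-source instruct model with the Hugging Face transformers pipeline. The model choice, prompt, and generation settings are illustrative assumptions, not details from the posting.

```python
# Minimal sketch: document generation with an open-source instruct model.
# Assumes GPU access (device_map needs 'accelerate' installed); the model
# name is a hypothetical choice, not a requirement of the role.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.2",  # hypothetical choice
    device_map="auto",
)

prompt = "Draft a two-sentence summary of a network outage incident report."
result = generator(prompt, max_new_tokens=128, do_sample=False)
print(result[0]["generated_text"])
```

In a production system of the kind described, this generation step would sit behind a service API with retrieval, prompt templating, and output validation around it.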
Posted 2 days ago
6.0 - 11.0 years
15 - 20 Lacs
Chennai, Bengaluru
Hybrid
Total experience: 6+ years.
3+ years of experience in data engineering, preferably with real-time systems.
Proficient with Python, SQL, and distributed data systems (Kinesis, Spark, Flink, etc.).
Strong understanding of event-driven architectures, data lakes, and message serialization.
Experience with sensor data processing, telemetry ingestion, or mobility data is a plus.
Familiarity with Docker, CI/CD, Kubernetes, and cloud-native architectures.
Familiarity with building data pipelines and their workflows (e.g., Airflow), as in the sketch below.
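As a rough illustration of the pipeline-workflow skill listed above, here is a minimal Airflow DAG that schedules an hourly telemetry-ingestion task. The DAG id, task logic, and schedule are hypothetical stand-ins, not details from the posting.

```python
# Minimal sketch of an hourly ingestion workflow (Airflow 2.x style).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def ingest_telemetry(**context):
    # Placeholder: pull a batch of sensor events from a stream (e.g. Kinesis)
    # and land them in the data lake; real logic would go here.
    print("ingesting telemetry batch for", context["ds"])


with DAG(
    dag_id="telemetry_ingest",          # hypothetical name
    start_date=datetime(2025, 1, 1),
    schedule="@hourly",                 # 'schedule_interval' on older Airflow
    catchup=False,
) as dag:
    ingest = PythonOperator(task_id="ingest", python_callable=ingest_telemetry)
```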
Posted 2 days ago
10.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Description
AWS Sales, Marketing, and Global Services (SMGS) is responsible for driving revenue, adoption, and growth from the largest and fastest-growing small- and mid-market accounts to enterprise-level customers, including the public sector. Amazon has built a global reputation for being the most customer-centric company, a company that customers from all over the world recognize, value, and trust for both our products and services. Amazon has a fast-paced environment where we “Work Hard, Have Fun and Make History.”

As an increasing number of enterprises move their critical systems to the cloud, AWS India needs highly capable technical consulting talent to help our largest and strategically important customers navigate the operational challenges and complexities of AWS Cloud. We are looking for Technical Consultants to support our customers’ creative and transformative spirit of innovation across all technologies, including Compute, Storage, Database, Data Analytics, Application Services, Networking, Serverless and more. This is not a sales role, but rather an opportunity to be the principal technical advisor for organizations ranging from start-ups to large enterprises.

As a Technical Account Manager, you will be the primary technical point of contact for one or more customers, helping to plan, debug, and oversee ongoing operations of business-critical applications. You will get your hands dirty troubleshooting application, network, database, and architectural challenges using a suite of internal AWS Cloud tools as well as your existing knowledge and toolkits. We are seeking individuals with strong backgrounds in IT consulting and in related areas such as solution design, application and system development, database management, big data and analytics, DevOps consulting, and media technologies. Knowledge of programming and scripting is beneficial to the role.

Key job responsibilities
Every day will bring new and exciting challenges on the job while you:
Learn and use Cloud technologies.
Interact with leading technologists around the world.
Work on critical, highly complex customer problems that may span multiple AWS Cloud services.
Apply advanced troubleshooting techniques to provide unique solutions to our customers’ individual needs.
Work directly with AWS Cloud subject matter experts to help reproduce and resolve customer issues.
Write tutorials, how-to videos, and other technical articles for the customer community.
Leverage your extensive customer support experience and provide feedback to internal AISPL teams on how to improve our services.
Drive projects that improve support-related processes and our customers’ technical support experience.
Assist in the design and architecture of AWS and hybrid cloud solutions.
Help enterprises define IT and business processes that work well with cloud deployments.
Be available outside of business hours to help coordinate the handling of urgent issues as needed.

A day in the life
A TAM’s daily activities involve managing complex technical and critical service events while serving as the principal technical advisor for enterprise customers. They spend their time partnering with customers to optimize AWS usage, tracking operational issues, and managing feature requests and launches, while also working directly with internal AWS teams to exceed customer expectations.
As a trusted advisor, they provide strategic technical guidance to help plan and build solutions using best practices, while keeping their customers’ AWS environments operationally healthy.

About The Team
Diverse Experiences: AWS values diverse experiences. Even if you do not meet all of the preferred qualifications and skills listed in the job description, we encourage candidates to apply. If your career is just starting, hasn’t followed a traditional path, or includes alternative experiences, don’t let it stop you from applying.

Why AWS? Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating — that’s why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses.

Inclusive Team Culture: Here at AWS, it’s in our nature to learn and be curious. Our employee-led affinity groups foster a culture of inclusion that empowers us to be proud of our differences. Ongoing events and learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon (diversity) conferences, inspire us to never stop embracing our uniqueness.

Mentorship & Career Growth: We’re continuously raising our performance bar as we strive to become Earth’s Best Employer. That’s why you’ll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional.

Work/Life Balance: We value work-life harmony. Achieving success at work should never come at the expense of life at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there’s nothing we can’t achieve in the cloud.

Basic Qualifications
Bachelor’s degree in Computer Science, IT, Math, or a related discipline, or equivalent work experience.
10+ years of hands-on infrastructure, troubleshooting, systems administration, networking, DevOps, or application development experience in a distributed systems environment.
External enterprise customer-facing experience as a technical lead, with strong oral and written communication skills, presenting to both large and small audiences.
Mobility to travel to client locations as needed.

Preferred Qualifications
Experience in a 24x7 operational services or support environment.
Advanced experience in one or more of the following areas: software design or development, content distribution/CDN, scripting/automation, database architecture, cloud architecture, cloud migrations, IP networking, IT security, Big Data/Hadoop/Spark, operations management, service-oriented architecture, etc.
Experience with AWS Cloud services and/or other cloud offerings.

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Company - AWS India - Delhi
Job ID: A2989844
Posted 2 days ago
12.0 - 15.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Our Company
Changing the world through digital experiences is what Adobe’s all about. We give everyone—from emerging artists to global brands—everything they need to design and deliver exceptional digital experiences! We’re passionate about empowering people to create beautiful and powerful images, videos, and apps, and transform how companies interact with customers across every screen. We’re on a mission to hire the very best and are committed to creating exceptional employee experiences where everyone is respected and has access to equal opportunity. We realize that new ideas can come from everywhere in the organization, and we know the next big idea could be yours!

The Opportunity
Join Adobe in the heart of Bangalore, where cutting-edge engineering meets outstanding innovation. As a Software Development Engineer, you will play a pivotal role in crafting the future of digital experiences. This is an outstanding opportunity to develop groundbreaking systems and services as part of a multifaceted and ambitious team of machine learning engineers, data engineers and front-end engineers. Your work will be instrumental in delivering powerful technology that empowers users globally. You will be an experienced backend engineer for the AI/ML, Data Platform, and Search and Recommendations teams of Adobe Learning Manager.

What You’ll Do
Build Java-based services that power APIs for search, recommendations, AI assistants, reporting and analytics.
Build backend systems such as indexing pipelines for search and vector datastores (see the sketch after this listing).
Build horizontally scalable data pipelines.
Provide technical leadership for the design and architecture of systems that blend data, ML and services stacks.
Work closely with Machine Learning Scientists, Data Engineers, UX Designers and Product Managers to develop solutions across search, recommendations, AI assistants and data engineering.
Integrate Natural Language Processing (NLP) capabilities into the stack.
Analyze and present key findings, insights and concepts to key influencers and leaders, and contribute to building the product roadmap.
Deliver highly reliable services with great quality and operational excellence.

What you need to succeed
A Bachelor's degree in Computer Science or a relevant stream.
12 to 15 years of relevant experience.
At least 5 years of hands-on experience building microservices and REST APIs using Java.
At least 5 years of hands-on experience building data pipelines using big data technologies such as Hadoop, Spark or Storm.
Strong hands-on experience with RDBMS and NoSQL databases.
Strong grasp of the fundamentals of web services and distributed computing.
Strong background in data engineering and hands-on experience with big data technologies.
Strong analytical and problem-solving skills.
Hands-on experience with Python, Elasticsearch, Spark and Kafka would be a plus.
Hands-on experience rolling out AI- and ML-based solutions would be a plus.
Enthusiasm for technological trends and eagerness to innovate.
Ability to quickly ramp up on new technologies.
Proven track record of engineering-generalist resourcefulness.

Adobe is proud to be an Equal Employment Opportunity employer. We do not discriminate based on gender, race or color, ethnicity or national origin, age, disability, religion, sexual orientation, gender identity or expression, veteran status, or any other applicable characteristics protected by law. Learn more about our vision here. Adobe aims to make Adobe.com accessible to any and all users.
If you have a disability or special need that requires accommodation to navigate our website or complete the application process, email accommodations@adobe.com or call (408) 536-3015.
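The role above is Java-centric, but the search-indexing idea it names is easy to sketch. Below is a minimal illustration using the Python Elasticsearch client; the endpoint, index name, and documents are all hypothetical placeholders.

```python
# Minimal sketch: bulk-index documents, then run a match query.
from elasticsearch import Elasticsearch, helpers

es = Elasticsearch("http://localhost:9200")  # placeholder endpoint

docs = [
    {"id": 1, "title": "Intro to Spark", "skill": "data engineering"},
    {"id": 2, "title": "Search Relevance 101", "skill": "search"},
]

# Index each document into a hypothetical 'courses' index.
helpers.bulk(
    es,
    ({"_index": "courses", "_id": d["id"], "_source": d} for d in docs),
)

resp = es.search(index="courses", query={"match": {"title": "spark"}})
print(resp["hits"]["total"])
```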
Posted 2 days ago
2.0 - 5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About Us
InMobi is the leading provider of content, monetization, and marketing technologies that fuel growth for industries around the world. Our end-to-end advertising software platform, connected content, and commerce experiences activate audiences, drive real connections, and diversify revenue for businesses everywhere. InMobi Advertising is an end-to-end advertising platform that helps advertisers drive real connections with consumers. We drive customer growth by helping businesses understand, engage, and acquire consumers effectively through data-driven media solutions. Learn more at advertising.inmobi.com.

Glance is a consumer technology company that operates disruptive digital platforms, including Glance, Roposo, and Nostra. Glance’s smart lockscreen and TV experience inspires consumers to make the most of every moment by surfing relevant content without the need for searching and downloading apps. Glance is currently available on over 450 million smartphones and televisions worldwide. Learn more at glance.com.

Born in India, InMobi maintains a large presence in Bangalore and San Mateo, CA, and has operations in New York, Singapore, Delhi, Mumbai, Beijing, Shanghai, Jakarta, Manila, Kuala Lumpur, Sydney, Melbourne, Seoul, Tokyo, London, and Dubai. To learn more, visit inmobi.com.

What is the team like?
InMobi Exchange is one of the world's leading advertising platforms, handling ~2M ad requests per second and serving both publisher and advertiser needs end to end. The Ad-Serving team is responsible for building a cutting-edge ad-serving engine that finds and serves the best-fitting ad to the end user. As a core member, your code and systems will directly impact revenue daily.

What do we expect from you?
Experience: 2–5 years of development experience.
Education: B.E./B.Tech in Computer Science or equivalent.
Strong development and coding experience in one or more programming languages such as Java (OO programming), Scala, Spark, or Python.
Expertise in data structures, algorithms, and concurrency.
Experience in microservices architecture, multi-threading, and performance-oriented programming and design.
Good organization, communication and interpersonal skills.
A proven performer and team player who enjoys challenging assignments in a high-energy, fast-growing start-up workplace.
A self-starter who can work well with minimal guidance and in a fluid environment.
Strong attention to detail.
Excited by the challenges of developing highly scalable, distributed systems for building audience-targeting capabilities.
Agility and the ability to adapt quickly to changing requirements, scope, and priorities.

Nice To Have Skills
Experience in the online advertising domain.
Experience working on massively large-scale data systems in production environments.
Experience leveraging user data for behavioral targeting and ad relevance.
Experience in the big data analytics domain.
Experience building products that are powered by data and insights.
Experience hosting and deploying applications on public clouds such as Microsoft Azure, GCP, or AWS.

The InMobi Culture
At InMobi, culture isn’t a buzzword; it's an ethos woven by every InMobian, reflecting our diverse backgrounds and experiences. We thrive on challenges and seize every opportunity for growth. Our core values of thinking big, being passionate, showing accountability, and taking ownership with freedom guide us in every decision we make.
We believe in nurturing and investing in your development through continuous learning and career progression with our InMobi Live Your Potential program. InMobi is proud to be an Equal Employment Opportunity employer, and we make reasonable accommodations for qualified individuals with disabilities. Visit https://www.inmobi.com/company/careers to better understand our benefits, values, and more!
Posted 2 days ago
5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Description
Alimentation Couche-Tard Inc. (ACT) is a global Fortune 200 company and a leader in the convenience store and fuel space, with over 17,000 stores in 31 countries serving more than 6 million customers each day.

It is an exciting time to be a part of the growing Data Engineering team at Circle K. We are driving a well-supported cloud-first strategy to unlock the power of data across the company and help teams discover, value and act on insights from data across the globe. With our strong data pipeline, this position will play a key role partnering with our Technical Development stakeholders to enable analytics for long-term success.

About The Role
We are looking for a Senior Data Engineer with a collaborative, “can-do” attitude who is committed and strives with determination and motivation to make their team successful, and who has experience architecting and implementing technical solutions as part of a greater data transformation strategy. This role is responsible for hands-on sourcing, manipulation, and delivery of data from enterprise business systems to the data lake and data warehouse. It will help drive Circle K’s next phase in the digital journey by modeling and transforming data to achieve actionable business outcomes. The Sr. Data Engineer will create, troubleshoot and support ETL pipelines and the cloud infrastructure involved in the process, and will support the visualization team.

Roles and Responsibilities
Collaborate with business stakeholders and other technical team members to acquire and migrate data sources that are most relevant to business needs and goals.
Demonstrate deep technical and domain knowledge of relational and non-relational databases, data warehouses, and data lakes, among other structured and unstructured storage options.
Determine the solutions that are best suited to develop a pipeline for a particular data source.
Develop data flow pipelines to extract, transform, and load data from various data sources in various forms, including custom ETL pipelines that enable model and product development.
Be efficient in ETL/ELT development using Azure cloud services and Snowflake, including testing and operation/support processes (RCA of production issues, code/data fix strategy, monitoring and maintenance).
Work with modern data platforms including Snowflake to develop, test, and operationalize data pipelines for scalable analytics delivery.
Provide clear documentation for delivered solutions and processes, integrating documentation with the appropriate corporate stakeholders.
Identify and implement internal process improvements for data management (automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability).
Stay current with and adopt new tools and applications to ensure high-quality and efficient solutions.
Build a cross-platform data strategy to aggregate multiple sources and process development datasets.
Be proactive in stakeholder communication; mentor and guide junior resources through regular KT/reverse KT, help them identify production bugs/issues where needed, and provide resolution recommendations.

Job Requirements
Bachelor’s degree in Computer Engineering, Computer Science or a related discipline; Master’s degree preferred.
5+ years of ETL design, development, and performance tuning using ETL tools such as SSIS/ADF in a multi-dimensional data warehousing environment.
5+ years of experience setting up and operating data pipelines using Python or SQL.
5+ years of advanced SQL programming: PL/SQL, T-SQL.
5+ years of experience working with Snowflake, including Snowflake SQL, data modeling, and performance optimization.
Strong hands-on experience with cloud data platforms such as Azure Synapse and Snowflake for building data pipelines and analytics workloads.
5+ years of strong and extensive hands-on experience in Azure, preferably data-heavy/analytics applications leveraging relational and NoSQL databases, data warehouses and big data.
5+ years of experience with Azure Data Factory, Azure Synapse Analytics, Azure Analysis Services, Azure Databricks, Blob Storage, Databricks/Spark, Azure SQL DW/Synapse, and Azure Functions.
5+ years of experience defining and enabling data quality standards for auditing and monitoring.
Strong analytical abilities and intellectual curiosity.
In-depth knowledge of relational database design, data warehousing and dimensional data modeling concepts.
Understanding of REST and good API design.
Experience working with Apache Iceberg, Delta tables and distributed computing frameworks.
Strong collaboration and teamwork skills, and excellent written and verbal communication skills.
Self-starter, motivated, with the ability to work in a fast-paced development environment. Agile experience highly desirable.
Proficiency in the development environment, including IDE, database server, Git, continuous integration, unit-testing tools, and defect management tools.

Knowledge
Strong knowledge of data engineering concepts (data pipeline creation, data warehousing, data marts/cubes, data reconciliation and audit, data management).
Strong working knowledge of Snowflake, including warehouse management, Snowflake SQL, and data sharing techniques.
Experience building pipelines that source from or deliver data into Snowflake in combination with tools like ADF and Databricks (see the sketch below).
Working knowledge of DevOps processes (CI/CD), Git/Jenkins version control, Master Data Management (MDM) and data quality tools.
Strong experience in ETL/ELT development, QA and operation/support processes (RCA of production issues, code/data fix strategy, monitoring and maintenance).
Hands-on experience with databases (Azure SQL DB, MySQL, Cosmos DB, etc.), file systems (Blob Storage), and Python/Unix shell scripting.
ADF, Databricks and Azure certification is a plus.
Technologies we use: Databricks, Azure SQL DW/Synapse, Azure Tabular, Azure Data Factory, Azure Functions, Azure Containers, Docker, DevOps, Python, PySpark, Scripting (PowerShell, Bash), Git, Terraform, Power BI, Snowflake.
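For flavor, here is a minimal sketch of one pattern named above: a Databricks/Spark job landing curated data into Snowflake via the Snowflake Spark connector. The connection values, storage path, and table name are placeholders, and the connector must already be installed on the cluster.

```python
# Minimal sketch: read raw files from ADLS, write the result to Snowflake.
# Assumes a Databricks-provided 'spark' session and the Snowflake Spark connector.
sf_options = {
    "sfURL": "<account>.snowflakecomputing.com",  # placeholder
    "sfUser": "<user>",
    "sfPassword": "<password>",
    "sfDatabase": "ANALYTICS",
    "sfSchema": "STAGE",
    "sfWarehouse": "LOAD_WH",
}

raw = spark.read.parquet("abfss://raw@<storage>.dfs.core.windows.net/pos/")

(raw.write
    .format("net.snowflake.spark.snowflake")  # 'snowflake' alias on Databricks
    .options(**sf_options)
    .option("dbtable", "POS_TRANSACTIONS")    # hypothetical target table
    .mode("append")
    .save())
```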
Posted 2 days ago
3.0 years
0 Lacs
India
On-site
About Oportun
Oportun (Nasdaq: OPRT) is a mission-driven fintech that puts its 2.0 million members' financial goals within reach. With intelligent borrowing, savings, and budgeting capabilities, Oportun empowers members with the confidence to build a better financial future. Since inception, Oportun has provided more than $16.6 billion in responsible and affordable credit, saved its members more than $2.4 billion in interest and fees, and helped its members save an average of more than $1,800 annually. Oportun has been certified as a Community Development Financial Institution (CDFI) since 2009.

Working at Oportun
Working at Oportun means enjoying a differentiated experience of being part of a team that fosters a diverse, equitable and inclusive culture where we all feel a sense of belonging and are encouraged to share our perspectives. This inclusive culture is directly connected to our organization's performance and ability to fulfill our mission of delivering affordable credit to those left out of the financial mainstream. We celebrate and nurture our inclusive culture through our employee resource groups.

Position Overview
We are growing our world-class team of mission-driven, entrepreneurial Data Scientists who are passionate about broadening financial inclusion by unlocking insights from non-traditional data. Be part of the team responsible for developing and enhancing Oportun’s core intellectual property used in scoring risk for underbanked consumers who lack a traditional credit bureau score. In this role you will be on the cutting edge, working with large and diverse alternative data sets (i.e., data from dozens of sources including transactional, mobile, utility, and other financial services) and utilizing machine learning and statistical modeling to build scores and strategies for managing risk, collections/loss mitigation, and fraud. You will also drive growth and optimize marketing spend across channels by leveraging alternative data to help predict which consumers would likely be interested in Oportun’s affordable, credit-building loan product.

Responsibilities
Develop data products and machine learning models used in risk, fraud, collections, and portfolio management, and provide a frictionless customer experience for the various products and services Oportun provides.
Build accurate and automated monitoring tools that keep a close eye on the performance of models and rules.
Build a model deployment platform that shortens the time to implement new models.
Build end-to-end reusable pipelines from data acquisition to model output delivery.
Lead initiatives to drive business value from start to finish, including project planning, communication, and stakeholder management.
Lead discussions with Compliance, Bank Partners, and Model Risk Management teams to facilitate model governance activities such as model validations and monitoring.

Qualifications
A relentless problem solver and out-of-the-box thinker with a proven track record of driving business results in a timely manner.
Master’s degree or PhD in Statistics, Mathematics, Computer Science, Engineering, Economics or another quantitative discipline (a Bachelor’s degree with significant relevant experience will be considered).
Hands-on experience leveraging machine learning techniques such as Gradient Boosting, Logistic Regression and Neural Networks to solve real-world problems (see the sketch below).
3+ years of hands-on experience with data extraction, cleaning, analysis and building reusable data pipelines; proficient in SQL, Spark SQL and/or Hive.
3+ years of experience leveraging modern machine learning toolsets and programming languages such as Python.
Excellent written and oral communication skills.
Strong stakeholder management and project management skills.
Comfortable in a high-growth, fast-paced, agile environment.
Experience working with AWS EMR, SageMaker or other cloud-based platforms is a plus.
Experience with HDFS, Hive, shell scripting and other big data tools is a plus.
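As a minimal sketch of the modeling work named above (risk scoring with gradient boosting), here is an illustrative scikit-learn example. The feature file, label column, and hyperparameters are hypothetical, not Oportun's actual method.

```python
# Minimal sketch: train and evaluate a gradient-boosted risk model.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_parquet("alt_data_features.parquet")  # hypothetical feature table
X, y = df.drop(columns=["defaulted"]), df["defaulted"]

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

model = GradientBoostingClassifier(
    n_estimators=300, learning_rate=0.05, max_depth=3
)
model.fit(X_tr, y_tr)

# AUC is a common discrimination metric for credit-risk scores.
print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```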
Posted 2 days ago
50.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Gap Inc.
Our past is full of iconic moments — but our future is going to spark many more. Our brands — Gap, Banana Republic, Old Navy and Athleta — have dressed people from all walks of life and all kinds of families, all over the world, for every occasion for more than 50 years. But we’re more than the clothes that we make. We know that business can and should be a force for good, and it’s why we work hard to make product that makes people feel good, inside and out. It’s why we’re committed to giving back to the communities where we live and work. If you're one of the super-talented who thrive on change, aren't afraid to take risks and love to make a difference, come grow with us.

About The Role
The Manager of Supplier Management will lead the supplier relationship management function within the Accounts Payable (AP) team. This role is responsible for overseeing and managing the company's supplier base, ensuring timely and accurate vendor information, resolving supplier issues, and optimizing supplier payment processes. The ideal candidate will have a deep understanding of supplier management, AP processes, and strong leadership abilities.

What You'll Do
Supplier Relationship Management: Develop and maintain strong relationships with key suppliers, ensuring open and effective communication. Address and resolve supplier issues or disputes regarding invoicing, payments, and terms in a timely and professional manner. Work closely with suppliers to understand their needs and improve the overall supplier experience.
Supplier Onboarding & Information Management: Lead the supplier onboarding process, ensuring that all relevant supplier information is gathered, verified, and entered into the system accurately. Regularly audit and update supplier information to ensure accuracy and compliance. Collaborate with procurement and legal teams to ensure all contracts and supplier agreements are aligned with company policies.
Accounts Payable Collaboration: Collaborate with the AP team to ensure seamless processing of supplier invoices and payments, optimizing cash flow and vendor satisfaction. Oversee the resolution of any discrepancies between suppliers and internal teams (e.g., procurement, finance) to ensure timely payment. Work closely with AP teams to address supplier inquiries, track payment status, and resolve issues related to invoice processing and payment cycles.
Process Improvement & Efficiency: Continuously assess and improve supplier management and AP processes to enhance efficiency, reduce errors, and increase automation. Implement and maintain best practices for managing supplier relationships, including effective communication, issue resolution, and performance metrics. Identify opportunities for process optimization within the AP team to support a faster, more efficient payment cycle.
Supplier Performance Monitoring: Develop and implement metrics and KPIs to measure supplier performance, ensuring timely deliveries, adherence to terms, and quality standards. Track and report on supplier performance, escalating issues when necessary and working with vendors to improve outcomes.
Reporting & Analysis: Generate regular reports on supplier activity, payment cycles, aging analysis, and discrepancies for senior leadership. Provide data-driven insights and recommendations to improve supplier management and accounts payable processes.
Compliance & Risk Management: Ensure all supplier management activities comply with internal controls, accounting standards, and regulatory requirements.
Identify potential risks in supplier relationships and take proactive steps to mitigate them.
Collaboration with Cross-Functional Teams: Partner with procurement, legal, and treasury teams to ensure that supplier terms, contracts, and relationships align with corporate goals. Support cross-functional projects that require supplier coordination, such as system upgrades or new process implementations.

Who You Are
Bachelor’s degree in Business, Finance, Accounting, or a related field.
7+ years of experience in supplier management, accounts payable, or procurement, with at least 3 years in a managerial or leadership role.
Strong knowledge of supplier relationship management, procurement processes, and accounts payable operations.
Experience with ERP systems (e.g., SAP, Oracle, or similar), supplier management software, and advanced Excel skills.
Excellent communication, negotiation, and interpersonal skills, with the ability to manage multiple stakeholder relationships effectively.
Strong analytical skills and the ability to assess and improve processes.
Demonstrated ability to manage a team, mentor and develop talent, and build cross-functional relationships.
Knowledge of compliance regulations, internal controls, and audit processes.
High attention to detail and the ability to work under pressure to meet deadlines in a fast-paced environment.

Benefits at Gap Inc.
One of the most competitive paid time off plans in the industry.
Comprehensive health coverage for employees, same-sex partners and their families.
Health and wellness program: free annual health check-ups, fitness center and Employee Assistance Program.
Comprehensive benefits to support the journey of parenthood.
Retirement planning assistance.
See more of the benefits we offer.

Gap Inc. is an equal-opportunity employer and is committed to providing a workplace free from harassment and discrimination. We are committed to recruiting, hiring, training and promoting qualified people of all backgrounds, and make all employment decisions without regard to any protected status. We have received numerous awards for our long-held commitment to equality and will continue to foster a diverse and inclusive environment of belonging. In 2022, we were recognized by Forbes as one of the World's Best Employers and one of the Best Employers for Diversity.
Posted 2 days ago
3.0 - 8.0 years
5 - 15 Lacs
Hyderabad
Work from Office
Dear Candidates,

We are conducting a walk-in interview in Hyderabad for the position of Data Engineering on 20th/21st/22nd June 2025.

Position: Data Engineering

Job description:
Expert knowledge in AWS Data Lake implementation and support (S3, Glue, DMS, Athena, Lambda, API Gateway, Redshift).
Handling of data-related activities such as data parsing, cleansing, quality definition, data pipelines, storage and ETL scripts.
Experience in programming languages: Python/PySpark/SQL.
Hands-on experience with data migration.
Experience consuming REST APIs using various authentication options within AWS Lambda.
Ability to orchestrate triggers, and to debug and schedule batch jobs, using AWS Glue, Lambda and Step Functions (see the sketch below).
Understanding of AWS security features such as IAM roles and policies.
Exposure to DevOps tools.
AWS certification is highly preferred.

Mandatory skills for Data Engineer: Python/PySpark, AWS Glue, Lambda, Redshift.

Date: 20th June 2025 to 22nd June 2025
Time: 9.00 AM to 6.00 PM
Eligibility: Any Graduate
Experience: 2–10 years
Gender: Any

Interested candidates can walk in directly. For any queries, please contact us at +91 7349369478 / 8555079906.

Interview Venue Details:
Selectify Analytics
Address: Capital Park (Jain Sadguru Capital Park), Ayyappa Society, Silicon Valley, Madhapur, Hyderabad, Telangana 500081
Contact Person: Mr. Deepak/Saqeeb/Ravi Kumar
Interview Time: 9.00 AM to 6.00 PM
Contact Number: +91 7349369478 / 8555079906
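To illustrate the Glue skills listed above, here is a minimal AWS Glue ETL job skeleton in Python. The catalog database, table name, and S3 path are hypothetical placeholders, not details from the posting.

```python
# Minimal sketch of a Glue ETL job: catalog table in, Parquet on S3 out.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Hypothetical source table landed by DMS into the Glue Data Catalog.
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db", table_name="orders"
)

glue_context.write_dynamic_frame.from_options(
    frame=dyf,
    connection_type="s3",
    connection_options={"path": "s3://curated-bucket/orders/"},  # placeholder
    format="parquet",
)
job.commit()
```

A job like this would typically be triggered on a schedule or by a Lambda/Step Functions workflow, which matches the orchestration skills the posting asks for.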
Posted 2 days ago
12.0 years
0 Lacs
Satara, Maharashtra, India
On-site
Join us as a Sourcing Manager in Satara, Maharashtra, to be responsible for managing and developing the local supplier base to support the factory’s strategic needs. The role ensures cost-effective, timely, and high-quality supply of materials and services while aligning with regional, product group, and global sourcing strategies.

About The Job
At Alfa Laval, we always go that extra mile to overcome the toughest challenges. Our driving force is to accelerate success for our customers, people and planet. You can only achieve that by having dedicated people with a curious mind. Curiosity is the spark behind great ideas, and great ideas drive progress. As a member of our team, you thrive in a truly diverse and inclusive workplace based on care and empowerment. You are here to make a difference, constantly building bridges to the future with sustainable solutions that have an impact on our planet’s most urgent problems, making the world a better place every day.

About The Position
This position is located in Satara and will report to the Factory and Site Manager, Satara. In this role, the Sourcing Manager’s focus is to strengthen and further develop the existing supplier base in line with future capacity, quality, sustainability, and innovation needs. This position will manage sourcing for the GPHE, LA and WHE departments.

As a part of the team, you will:
Be responsible for supplier development and management of the existing supplier base.
Drive continuous development of existing local suppliers to improve performance, competitiveness, and capability.
Identify and implement opportunities for localization of materials or components in alignment with cost and lead-time reduction goals and with product group and global sourcing strategies.
Conduct regular supplier reviews and audits to ensure compliance with quality, safety, sustainability, and contractual requirements.

Collaboration and Alignment:
Act as the primary interface between the local factory and regional, product group, and global sourcing teams.
Ensure local sourcing activities align with global category strategies and product group roadmaps.
Participate in cross-functional sourcing and development projects, contributing local market insights and supplier capabilities.
Within the Product Groups, control, encourage, drive and push improvement for purchased material and suppliers (local and global).
Be accountable for the Product Groups' handshake process to secure a pipeline of purchasing initiatives, the right prioritization, and follow-up of execution.
Drive supply optimization for Alfa Laval from a Product Group perspective.
Chair weekly Product Group purchasing improvement meetings (pre-PIM meetings) and secure escalation of deviations to Global Purchasing (PIM) according to process.
Be accountable for the Product Groups' requirements during the execution of purchasing projects (global and local).
Actively contribute to the sourcing strategy and commodity strategy to strive for alignment with the Product Groups.
Give input to operational plans from a sourcing perspective.
Communicate significant changes of forecast to Global Purchasing.

Strategic Sourcing & Cost Management:
Lead local sourcing initiatives and support regional/global negotiations by providing data, supplier insights, and local market intelligence.
Support cost-reduction programs, make-or-buy analyses, and dual-sourcing strategies.
Monitor and manage local supplier risks and implement mitigation strategies where needed.
Operational Procurement Support:
Collaborate with planning, quality, engineering, and logistics to resolve supplier performance issues.
Ensure timely delivery of goods and services by coordinating closely with internal stakeholders and suppliers.
Full understanding of sourcing strategy.
Full understanding of supply chain needs and targets within a Product Group.
Full understanding of the products within the Product Group.
Good understanding of the supplier and material market situation (material prices, competition, risks).
Good understanding of the purchasing process and commercial deals.
Full understanding of material management.
Preferably trained in Green Belt and supplier development.

What You Know
Bachelor’s degree in mechanical or production engineering, supply chain, business administration or a related field.
12+ years of total experience, with a minimum of 5–7 years in sourcing or procurement, ideally in a manufacturing or industrial setting.
Proven experience in supplier development and cross-functional collaboration.
Strong negotiation, communication, and analytical skills.
Ability to navigate complex stakeholder networks (local, regional, global).
Fluent in English.
Proactive, results-driven, hands-on approach.
Strong interpersonal and intercultural communication skills.
Able to work independently while ensuring alignment with broader sourcing teams.
High integrity and commitment to compliance and sustainability standards.

Key Relationships
Product Group Sourcing Managers and the sourcing organisation within Product Groups
Local Supply Chain Managers
Global Sourcing and Commodity Managers (Global Purchasing organisation)
Regional Sourcing Manager
Factory Managers

Physical & Environmental Factors
Office environment with frequent attendance on the shop floor. Safety equipment (footwear, hearing and eye protection) is required on the shop floor. Environmental factors include hazardous materials, work location, work surfaces, and exposure.

Why Should You Apply
We offer you an interesting and challenging position in an open and friendly environment where we help each other to develop and create value for our customers. It is an exciting place to build a global network with different nationalities. Your work will have a true impact on Alfa Laval’s future success, and you will be learning new things every day.

“We care about diversity, inclusion and equity in our recruitment processes. We also believe behavioural traits can provide important insights into a candidate's fit for a role. To help us achieve this we apply Pymetrics assessments, and upon application you will be invited to play the assessment games.”
Posted 2 days ago
4.0 - 7.0 years
7 - 14 Lacs
Pune, Mumbai (All Areas)
Work from Office
Job Profile Description
Create and maintain highly scalable data pipelines across Azure Data Lake Storage and Azure Synapse using Data Factory, Databricks and Apache Spark/Scala (see the sketch below).
Be responsible for managing a growing cloud-based data ecosystem and the reliability of our corporate data lake and analytics data mart.
Contribute to the continued evolution of the Corporate Analytics Platform and integrated data model.
Be part of the Data Engineering team in all phases of work, including analysis, design and architecture, to develop and implement cutting-edge solutions.
Negotiate and influence changes outside of the team that continuously shape and improve the data strategy.
4+ years of experience implementing analytics data solutions leveraging Azure Data Factory, Databricks, Logic Apps, ML Studio, Data Lake and Synapse.
Working experience with Scala, Python or R.
Bachelor’s degree or equivalent experience in Computer Science, Information Systems, or related disciplines.
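A minimal sketch of the kind of pipeline described above: a Databricks step that reads raw Parquet from the data lake, dedupes it, and appends to a Delta table. The storage paths and column names are hypothetical placeholders.

```python
# Minimal sketch of a Databricks pipeline step (ADLS raw -> Delta curated).
from pyspark.sql import functions as F

# 'spark' is the session Databricks provides; paths/columns are placeholders.
raw = spark.read.parquet("abfss://raw@<account>.dfs.core.windows.net/sales/")

curated = (
    raw.dropDuplicates(["order_id"])
       .withColumn("ingest_date", F.current_date())
)

(curated.write
    .format("delta")
    .mode("append")
    .partitionBy("ingest_date")
    .save("abfss://curated@<account>.dfs.core.windows.net/sales_mart/"))
```

In practice, Azure Data Factory would orchestrate a step like this on a schedule, which is the division of labor the role describes.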
Posted 2 days ago
3.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Description
Alimentation Couche-Tard Inc. (ACT) is a global Fortune 200 company and a leader in the convenience store and fuel space, with over 17,000 stores in 31 countries serving more than 6 million customers each day.

It is an exciting time to be a part of the growing Data Engineering team at Circle K. We are driving a well-supported cloud-first strategy to unlock the power of data across the company and help teams discover, value and act on insights from data across the globe. With our strong data pipeline, this position will play a key role partnering with our Technical Development stakeholders to enable analytics for long-term success.

About The Role
We are looking for a Data Engineer with a collaborative, “can-do” attitude who is committed and strives with determination and motivation to make their team successful, and who has experience implementing technical solutions as part of a greater data transformation strategy. This role is responsible for hands-on sourcing, manipulation, and delivery of data from enterprise business systems to the data lake and data warehouse. It will help drive Circle K’s next phase in the digital journey by transforming data to achieve actionable business outcomes.

Roles and Responsibilities
Collaborate with business stakeholders and other technical team members to acquire and migrate data sources that are most relevant to business needs and goals.
Demonstrate technical and domain knowledge of relational and non-relational databases, data warehouses, and data lakes, among other structured and unstructured storage options.
Determine the solutions that are best suited to develop a pipeline for a particular data source.
Develop data flow pipelines to extract, transform, and load data from various data sources in various forms, including custom ETL pipelines that enable model and product development.
Be efficient in ELT/ETL development using Azure cloud services and Snowflake, including testing and operational support (RCA, monitoring, maintenance).
Work with modern data platforms including Snowflake to develop, test, and operationalize data pipelines for scalable analytics delivery.
Provide clear documentation for delivered solutions and processes, integrating documentation with the appropriate corporate stakeholders.
Identify and implement internal process improvements for data management (automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability).
Stay current with and adopt new tools and applications to ensure high-quality and efficient solutions.
Build a cross-platform data strategy to aggregate multiple sources and process development datasets.
Be proactive in stakeholder communication; mentor and guide junior resources through regular KT/reverse KT, help them identify production bugs/issues where needed, and provide resolution recommendations.

Job Requirements
Bachelor’s degree in Computer Engineering, Computer Science or a related discipline; Master’s degree preferred.
3+ years of ETL design, development, and performance tuning using ETL tools such as SSIS/ADF in a multi-dimensional data warehousing environment.
3+ years of experience setting up and operating data pipelines using Python or SQL.
3+ years of advanced SQL programming: PL/SQL, T-SQL.
3+ years of experience working with Snowflake, including Snowflake SQL, data modeling, and performance optimization.
Strong hands-on experience with cloud data platforms such as Azure Synapse and Snowflake for building data pipelines and analytics workloads.
3+ years of strong and extensive hands-on experience in Azure, preferably data-heavy/analytics applications leveraging relational and NoSQL databases, data warehouses and big data.
3+ years of experience with Azure Data Factory, Azure Synapse Analytics, Azure Analysis Services, Azure Databricks, Blob Storage, Databricks/Spark, Azure SQL DW/Synapse, and Azure Functions.
3+ years of experience defining and enabling data quality standards for auditing and monitoring (a sketch of such a check follows below).
Strong analytical abilities and intellectual curiosity.
In-depth knowledge of relational database design, data warehousing and dimensional data modeling concepts.
Understanding of REST and good API design.
Experience working with Apache Iceberg, Delta tables and distributed computing frameworks.
Strong collaboration and teamwork skills; excellent written and verbal communication skills.
Self-starter, motivated, with the ability to work in a fast-paced development environment.
Agile experience highly desirable.
Proficiency in the development environment, including IDE, database server, Git, continuous integration, unit-testing tools, and defect management tools.

Preferred Skills
Strong knowledge of data engineering concepts (data pipeline creation, data warehousing, data marts/cubes, data reconciliation and audit, data management).
Strong working knowledge of Snowflake, including warehouse management, Snowflake SQL, and data sharing techniques.
Experience building pipelines that source from or deliver data into Snowflake in combination with tools like ADF and Databricks.
Working knowledge of DevOps processes (CI/CD), Git/Jenkins version control, Master Data Management (MDM) and data quality tools.
Strong experience in ETL/ELT development, QA and operation/support processes (RCA of production issues, code/data fix strategy, monitoring and maintenance).
Hands-on experience with databases (Azure SQL DB, MySQL, Cosmos DB, etc.), file systems (Blob Storage), and Python/Unix shell scripting.
ADF, Databricks and Azure certification is a plus.
Technologies we use: Databricks, Azure SQL DW/Synapse, Azure Tabular, Azure Data Factory, Azure Functions, Azure Containers, Docker, DevOps, Python, PySpark, Scripting (PowerShell, Bash), Git, Terraform, Power BI, Snowflake.
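As a small illustration of the data-quality auditing mentioned above, here is a hedged sketch using the Snowflake Python connector to fail fast on NULL keys after a load. Credentials, warehouse, and table names are placeholders.

```python
# Minimal sketch: a post-load data-quality check against Snowflake.
import snowflake.connector

conn = snowflake.connector.connect(
    account="<account>", user="<user>", password="<password>",  # placeholders
    warehouse="AUDIT_WH", database="ANALYTICS", schema="CURATED",
)

cur = conn.cursor()
cur.execute(
    "SELECT COUNT(*) FROM POS_TRANSACTIONS WHERE TRANSACTION_ID IS NULL"
)
null_keys = cur.fetchone()[0]
conn.close()

if null_keys:
    raise ValueError(f"DQ check failed: {null_keys} rows with NULL keys")
```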
Posted 2 days ago
1.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Description
We are looking for a Senior Social Media Executive who will be responsible for managing, strategizing, and optimizing our social media presence across various platforms. The ideal candidate should have hands-on experience in content creation, community engagement, performance analysis, and campaign management to drive brand awareness and engagement.

Key Responsibilities:
Social Media Strategy & Execution: Develop and execute social media strategies to enhance brand visibility and engagement. Manage and optimize social media calendars, ensuring timely and engaging content.
Content Creation & Management: Create, curate, and manage high-quality content (text, images, videos, and reels) tailored for each platform. Collaborate with designers, copywriters, and video editors to produce engaging social media content.
Community Engagement: Monitor and respond to audience comments, messages, and reviews to maintain a strong brand presence. Engage with influencers, industry professionals, and relevant communities to enhance brand positioning.
Performance Tracking & Analytics: Monitor key metrics (engagement, reach, impressions, follower growth, etc.) using tools like Meta Business Suite, Google Analytics, and other social media analytics platforms. Provide insights and recommendations for content and campaign optimization based on data analysis.
Paid Social Media Campaigns: Assist in strategizing and managing paid ad campaigns on Meta (Facebook & Instagram), LinkedIn, YouTube, and other platforms. Coordinate with the performance marketing team to track campaign performance and suggest improvements.
Trend Monitoring & Innovation: Stay updated with the latest social media trends, platform updates, and industry best practices. Experiment with new content formats and features (Reels, Stories, Lives, Polls, etc.) to drive engagement.

Requirements & Qualifications:
Minimum 1 year of hands-on experience in social media management and execution.
Strong understanding of platforms like Facebook, Instagram, LinkedIn, Twitter, YouTube, and emerging channels.
Proficiency in social media tools like Hootsuite, Buffer, Canva, Later, and Meta Business Suite.
Basic knowledge of social media ads and paid campaigns.
Excellent written and verbal communication skills.
Creative mindset with a keen eye for design and aesthetics.
Ability to multitask, work under tight deadlines, and adapt to evolving trends.

Preferred Qualifications:
Experience handling social media for brands in the real estate, fashion, lifestyle, or B2B sectors is a plus.
Knowledge of SEO for social media content.
Basic video editing and graphic design skills (using Canva, Adobe Spark, or Photoshop).

Perks & Benefits:
Opportunity to work with a dynamic and creative team.
Growth opportunities within the organization.
Exposure to various industries and projects.

Job Types: Full-time, Permanent
Pay: ₹20,000.00 - ₹25,000.00 per month

Drop your resume at hr@osumare.in or WhatsApp it to 9604153943.
Posted 2 days ago
2.0 years
0 Lacs
Gurugram, Haryana, India
Remote
IB English Faculty (DP – Grades 9 to 12)
📍 Location: Gurgaon (1st month onsite) → then Work From Home
💰 Salary: ₹7–8 LPA
📅 Work Days: 6 days/week
🕐 Experience: 1–2 years
🎓 Education: Must have BA & MA in English (Honours only)
Not Another English Class. A Sparkl-ing Experience.
Do you love teaching literature that makes teenagers think, not just memorize? Do you dream of taking students from Shakespeare to Arundhati Roy with purpose and passion? If yes, Sparkl is looking for you!
We’re hiring an IB English Faculty for DP (Grades 9–12) — someone who brings strong academic grounding, school-teaching experience, and that extra spark that makes stories come alive.
Who We’re Looking For:
✅ You must have taught English Literature in a formal school or tuition center (CBSE, ICSE, Cambridge, or IB preferred).
✅ You’ve handled school curriculum (not vocational/entrance prep like SAT, TOEFL, SSC, CAT, etc.).
✅ You have a Bachelor’s + Master’s degree in English Honours — no exceptions.
✅ You know how to explain literary devices, build essay-writing skills, and get teens talking about theme, tone, and character arcs.
✅ You’re confident, clear, and love working with high-schoolers.
What You'll Be Doing:
📚 Teach IB DP English for Grades 9–12 (focus on literature, writing, comprehension).
📝 Guide students through critical analysis, essay structuring, and academic writing.
📖 Bring texts alive — from Shakespeare to modern prose — in ways students will remember.
🏢 Begin with 1 month of in-person training at our Gurgaon office, then shift to remote work.
Why Join Sparkl?
✨ Work with top mentors in the IB space
✨ Teach smart, curious, high-performing students
✨ Young, passionate team and a flexible work environment
✨ Real impact — real growth
Posted 2 days ago
2.0 years
0 Lacs
Gurugram, Haryana, India
Remote
IB Physics Faculty (MYP + DP)
📍 Location: Gurgaon (1st month onsite) → then Work From Home
💰 Salary: ₹7–8 LPA
🕒 6 days/week | Immediate Joiners Preferred
Physics = Fun. Who Knew? (You Did.)
If you can turn Newton’s laws into a Netflix-worthy explanation, and you genuinely love helping teens get “the point” of Physics — then we want you at Sparkl.
We’re looking for a young IB Physics Educator to teach both MYP & DP, someone who can go from talking atoms to astrophysics — and make it fun.
The Role Includes:
🔬 Teaching IB Physics to students in Grades 6–12 (MYP & DP)
🧲 Creating energy in the virtual classroom — minus the resistance
🧪 Using experiments, analogies, and storytelling to explain tough concepts
🏢 Starting your journey with 1 month of training in Gurgaon, then fully remote
You Should Be Someone Who:
✅ Has 1–2 years of teaching or tutoring experience (IB/IGCSE a plus)
✅ Holds a graduate/postgraduate degree in Physics
✅ Communicates clearly, creatively, and confidently in English
✅ Cares deeply about student learning (not just the syllabus)
Why Work With Sparkl?
⚡ Young and fun team, serious about learning
🌎 Teach ambitious, globally-minded students
🧠 Mentorship and training that actually helps you grow
🏡 Work-from-home flexibility after initial onboarding
🌟 Don’t just teach Physics — spark a love for it. Apply today!
Posted 2 days ago
1.0 - 3.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Job Description
Job Title: Project Coordinator
__________________________________
About JLL:
We’re JLL—a leading professional services and investment management firm specializing in real estate. We have operations in over 80 countries and a workforce of over 91,000 individuals around the world who help real estate owners, occupiers and investors achieve their business ambitions. As a global Fortune 500 company, we also have an inherent responsibility to drive sustainability and corporate social responsibility. That’s why we’re committed to our purpose to shape the future of real estate for a better world.
We’re using the most advanced technology to create rewarding opportunities, amazing spaces and sustainable real estate solutions for our clients, our people and our communities. Our core values of teamwork, ethics and excellence are also fundamental to everything we do, and we’re honored to be recognized with awards for our success by organizations both globally and locally.
Creating a diverse and inclusive culture where we all feel welcomed, valued and empowered to achieve our full potential is important to who we are today and where we’re headed in the future. And we know that unique backgrounds, experiences, and perspectives help us think bigger, spark innovation and succeed together.
If this job description resonates with you, we encourage you to apply even if you don’t meet all of the requirements below. We’re interested in getting to know you and what you bring to the table!
__________________________________
Responsibilities:
Prepare project management reports and meeting minutes
Manage all project documentation, including contracts, budgets and schedules
Maintain best-practices templates on the SharePoint site
Administrative duties including, but not limited to: copying, coordinating travel arrangements, expense report preparation, organizing lunches, WebEx meetings, etc.
Manage accounts receivable according to the guidelines and requirements set by the Facilities Manager, Operations Manager, or project team
Ensure that all accounts receivable are maintained at a level not to exceed the planned working capital charge as set by corporate finance, the project team and/or the Regional Operations Manager
Assist the local team in meeting targeted financial numbers as determined on a yearly basis by the Management Executive Committee
Proactively manage project-related issues on an account or assignment
Demonstrate proficiency in the use and application of all project management tools
Prepare PowerPoint presentations, memos, responses to proposals and research
Actively collaborate with stakeholders and leverage platform support
Assist with client communication, conferences, and events
Maintain all files and documents related to the project assignment
Any and all other duties and tasks assigned
Requirements/Qualifications:
Bachelor’s degree from an accredited institution required
1-3 years of experience working in a similar role
Detail-oriented and organized; must have the ability to proactively plan for multiple projects at a time
Strong communication skills, both written and oral
Proficient with Microsoft programs such as PowerPoint, Word, Outlook, etc.
Must be a self-starter, able to start and complete projects independently
Proactive: does not wait for tasks to be assigned but always prompts to identify what else can be done
Customer Focus: dedicated to meeting the expectations and requirements of external and internal customers; acts with the customer in mind; establishes and maintains effective relationships with customers and gains their trust and respect
Dealing with Ambiguity: can effectively cope with change, shift gears comfortably, and decide and act without having the total picture
Interpersonal Savvy: relates well to all kinds of people, inside and outside the organization; uses diplomacy and tact
Posted 2 days ago
6.0 - 9.0 years
10 - 19 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
Mandatory skills: AWS, Kafka, ETL, Glue, Lambda. Tech stack experience required: Python, SQL.
Posted 2 days ago
2.0 - 4.0 years
7 - 9 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
POSITION: Senior Data Engineer / Data Engineer
LOCATION: Bangalore/Mumbai/Kolkata/Gurugram/Hyderabad/Pune/Chennai
EXPERIENCE: 2+ years
JOB TITLE: Senior Data Engineer / Data Engineer
OVERVIEW OF THE ROLE:
As a Data Engineer or Senior Data Engineer, you will be hands-on in architecting, building, and optimizing robust, efficient, and secure data pipelines and platforms that power business-critical analytics and applications. You will play a central role in the implementation and automation of scalable batch and streaming data workflows using modern big data and cloud technologies. Working within cross-functional teams, you will deliver well-engineered, high-quality code and data models, and drive best practices for data reliability, lineage, quality, and security.
Mandatory Skills:
Hands-on software coding or scripting for a minimum of 3 years
Experience in product management for at least 2 years
Stakeholder management experience for at least 3 years
Experience in one of the GCP, AWS, or Azure cloud platforms
Key Responsibilities:
Design, build, and optimize scalable data pipelines and ETL/ELT workflows using Spark (Scala/Python), SQL, and orchestration tools (e.g., Apache Airflow, Prefect, Luigi).
Implement efficient solutions for high-volume batch, real-time streaming, and event-driven data processing, leveraging best-in-class patterns and frameworks.
Build and maintain data warehouse and lakehouse architectures (e.g., Snowflake, Databricks, Delta Lake, BigQuery, Redshift) to support analytics, data science, and BI workloads.
Develop, automate, and monitor Airflow DAGs/jobs on cloud or Kubernetes, following robust deployment and operational practices (CI/CD, containerization, infra-as-code).
Write performant, production-grade SQL for complex data aggregation, transformation, and analytics tasks.
Ensure data quality, consistency, and governance across the stack, implementing processes for validation, cleansing, anomaly detection, and reconciliation.
Collaborate with Data Scientists, Analysts, and DevOps engineers to ingest, structure, and expose structured, semi-structured, and unstructured data for diverse use cases.
Contribute to data modeling, schema design, and data partitioning strategies, and ensure adherence to best practices for performance and cost optimization.
Implement, document, and extend data lineage, cataloging, and observability through tools such as AWS Glue, Azure Purview, Amundsen, or open-source technologies.
Apply and enforce data security, privacy, and compliance requirements (e.g., access control, data masking, retention policies, GDPR/CCPA).
Take ownership of the end-to-end data pipeline lifecycle: design, development, code reviews, testing, deployment, operational monitoring, and maintenance/troubleshooting.
Contribute to frameworks, reusable modules, and automation to improve development efficiency and maintainability of the codebase.
Stay abreast of industry trends and emerging technologies, participating in code reviews, technical discussions, and peer mentoring as needed.
Skills & Experience:
Proficiency with Spark (Python or Scala), SQL, and data pipeline orchestration (Airflow, Prefect, Luigi, or similar).
Experience with cloud data ecosystems (AWS, GCP, Azure) and cloud-native services for data processing (Glue, Dataflow, Dataproc, EMR, HDInsight, Synapse, etc.).
Hands-on development skills in at least one programming language (Python, Scala, or Java preferred); solid knowledge of software engineering best practices (version control, testing, modularity).
Deep understanding of batch and streaming architectures (Kafka, Kinesis, Pub/Sub, Flink, Structured Streaming, Spark Streaming).
Expertise in data warehouse/lakehouse solutions (Snowflake, Databricks, Delta Lake, BigQuery, Redshift, Synapse) and storage formats (Parquet, ORC, Delta, Iceberg, Avro).
Strong SQL development skills for ETL, analytics, and performance optimization.
Familiarity with Kubernetes (K8s), containerization (Docker), and deploying data pipelines in distributed/cloud-native environments.
Experience with data quality frameworks (Great Expectations, Deequ, or custom validation), monitoring/observability tools, and automated testing.
Working knowledge of data modeling (star/snowflake, normalized, denormalized) and metadata/catalog management.
Understanding of data security, privacy, and regulatory compliance (access management, PII masking, auditing, GDPR/CCPA/HIPAA).
Familiarity with BI or visualization tools (Power BI, Tableau, Looker, etc.) is an advantage but not core.
Previous experience with data migrations, modernization, or refactoring legacy ETL processes to modern cloud architectures is a strong plus.
Bonus: exposure to open-source data tools (dbt, Delta Lake, Apache Iceberg, Amundsen, Great Expectations, etc.) and knowledge of DevOps/MLOps processes.
Professional Attributes:
Strong analytical and problem-solving skills; attention to detail and commitment to code quality and documentation.
Ability to communicate technical designs and issues effectively with team members and stakeholders.
Proven self-starter, fast learner, and collaborative team player who thrives in dynamic, fast-paced environments.
Passion for mentoring, sharing knowledge, and raising the technical bar for data engineering practices.
Desirable Experience:
Contributions to open-source data engineering/tools communities.
Implementing data cataloging, stewardship, and data democratization initiatives.
Hands-on work with DataOps/DevOps pipelines for code and data.
Knowledge of ML pipeline integration (feature stores, model serving, lineage/monitoring integration) is beneficial.
EDUCATIONAL QUALIFICATIONS:
Bachelor’s or Master’s degree in Computer Science, Data Engineering, Information Systems, or a related field (or equivalent experience).
Certifications in cloud platforms (AWS, GCP, Azure) and/or data engineering (AWS Data Analytics, GCP Data Engineer, Databricks).
Experience working in an Agile environment with exposure to CI/CD, Git, Jira, Confluence, and code review processes.
Prior work in highly regulated or large-scale enterprise data environments (finance, healthcare, or similar) is a plus.
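To make the orchestration responsibilities above concrete, here is a minimal Airflow sketch of a daily pipeline: a Spark transform followed by a quality check. The DAG id, file paths, and submit commands are hypothetical placeholders, not this employer's actual setup.

```python
# Minimal Airflow 2.x sketch: a daily DAG that runs a Spark transform and
# then a SQL/Python validation step. Names and commands are illustrative.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator

default_args = {"retries": 2, "retry_delay": timedelta(minutes=5)}

with DAG(
    dag_id="orders_daily_etl",
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    default_args=default_args,
) as dag:
    transform = BashOperator(
        task_id="spark_transform",
        bash_command="spark-submit /opt/jobs/transform_orders.py --date {{ ds }}",
    )
    validate = BashOperator(
        task_id="quality_check",
        bash_command="python /opt/jobs/validate_orders.py --date {{ ds }}",
    )
    transform >> validate  # run the quality check only after the transform succeeds
```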
Posted 2 days ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Description:
We are seeking a Data Scientist with strong experience in Artificial Intelligence (AI), Machine Learning (ML), AWS cloud services, Big Data technologies, and Python programming. You will work on building and deploying data-driven solutions that solve real-world problems, enhance decision-making, and drive business value.
Key Responsibilities:
Develop and implement machine learning models and AI solutions for business use cases
Work with large datasets using Big Data tools for data processing, transformation, and analysis
Design, build, and deploy models on AWS cloud infrastructure
Collaborate with data engineers, analysts, and product teams to understand requirements and deliver insights
Communicate results and recommendations to technical and non-technical stakeholders
Continuously evaluate and improve the performance of existing models
Required Skills:
Strong proficiency in Python and popular ML libraries (e.g., Scikit-learn, TensorFlow, PyTorch, Pandas)
Hands-on experience with AWS services (S3, EC2, SageMaker, Lambda, etc.)
Experience with Big Data tools such as Spark, Hadoop, or similar
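As a small illustration of the modeling work this role describes, the sketch below trains and evaluates a classifier with scikit-learn. The dataset is synthetic and the hyperparameters are stand-ins, not a prescribed approach.

```python
# Minimal scikit-learn sketch: train a classifier and evaluate it on
# held-out data. The synthetic dataset is a placeholder for real features.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# AUC on the held-out set as a simple performance check
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```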
Posted 2 days ago
4.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: SAP
Management Level: Senior Associate
Job Description & Summary:
A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance, that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge.
Creating business intelligence from data requires an understanding of the business, the data, and the technology used to store and analyse that data. Using our Rapid Business Intelligence Solutions, data visualisation and integrated reporting dashboards, we can deliver agile, highly interactive reporting and analytics that help our clients to more effectively run their business and understand what business questions can be answered and how to unlock the answers.
Required Skills:
Degree in Computer Science or a related discipline
Minimum 4 years of relevant experience
Fluency in Python or shell scripting
Experience with data mining, modeling, mapping, and ETL processes
Experience with Azure Data Factory, Data Lake, Databricks, Synapse Analytics, BI dashboards, and BI implementation projects
Hands-on experience with Hadoop, PySpark, and Spark SQL
Knowledge of Azure/AWS, RESTful web services, SOAP, SOA, Microsoft SQL Server, MySQL Server, and Agile methodology is an advantage
Strong analytical, problem-solving, and communication skills
Excellent command of both written and spoken English
Should be able to design, develop, deliver, and maintain data infrastructures
Mandatory Skill Set: Hadoop, PySpark
Preferred Skill Set: Hadoop, PySpark
Years of experience required: 4-8
Qualifications: B.E/B.Tech
Required Skills: Hadoop Cluster, PySpark
Optional Skills:
Desired Languages (If blank, desired languages not specified)
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
Job Posting End Date
Posted 2 days ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Description:
We are seeking a Data Scientist with strong expertise in AI/ML, AWS, Big Data, and Python to join our data-driven team. You will design, build, and deploy scalable machine learning models and data pipelines that drive key business decisions.
Key Responsibilities:
Develop and implement machine learning models for classification, regression, clustering, and forecasting.
Work with large-scale structured and unstructured data using Big Data tools (e.g., Spark, Hadoop).
Build data pipelines and deploy models on AWS cloud services (e.g., S3, SageMaker, Lambda, EMR).
Conduct exploratory data analysis and feature engineering using Python (NumPy, Pandas, Scikit-learn, etc.).
Collaborate with engineering, product, and analytics teams to integrate models into production systems.
Continuously improve model accuracy and performance through testing and experimentation.
Required Skills:
Strong programming skills in Python and experience with ML libraries (e.g., Scikit-learn, TensorFlow, PyTorch).
Experience with AWS services for data processing and model deployment.
Familiarity with Big Data technologies like Hadoop, Spark, Hive, or similar.
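By way of example, here is a short pandas sketch of the feature-engineering step mentioned above: imputing a missing value, deriving a date feature, and aggregating to one row per entity. The dataframe and column names are hypothetical.

```python
# Small pandas feature-engineering sketch: impute, derive, and aggregate.
# All data and column names are illustrative stand-ins.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "customer_id": [1, 1, 2, 2, 3],
    "amount": [120.0, 80.0, 200.0, np.nan, 50.0],
    "purchased_at": pd.to_datetime(
        ["2025-01-03", "2025-02-10", "2025-01-15", "2025-03-01", "2025-02-20"]),
})

# Impute missing amounts and derive a simple time-based feature
df["amount"] = df["amount"].fillna(df["amount"].median())
df["purchase_month"] = df["purchased_at"].dt.month

# Aggregate to one row per customer for modeling
features = df.groupby("customer_id").agg(
    total_spend=("amount", "sum"),
    n_orders=("amount", "size"),
    last_purchase_month=("purchase_month", "max"),
).reset_index()
print(features)
```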
Posted 2 days ago
1.0 - 3.0 years
3 - 5 Lacs
Bengaluru
Work from Office
Role Description
Group Strategic Analytics (GSA) is part of the Group Chief Operating Office (COO), which acts as the bridge between the Bank's business and infrastructure functions to help deliver the efficiency, control, and transformation goals of the Bank. You will work within the Global Strategic Analytics Team as part of a global model strategy and deployment of Name List Screening and Transaction Screening. To be successful in that role, you will be familiar with the most recent data science methodologies and have a delivery-centric attitude, strong analytical skills, and a detail-oriented approach to breaking down complex matters into more understandable details.
The purpose of Name List Screening and Transaction Screening is to identify and investigate unusual customer names, transactions, and behavior, to understand whether that activity is considered suspicious from a financial crime perspective, and to report that activity to the government. You will be responsible for helping to implement and maintain the models for Name List Screening and Transaction Screening to ensure that all relevant criminal risks, typologies, products, and services are properly monitored.
We are looking for a high-performing Associate in financial crime model development, tuning, and analytics to support the global strategy for screening systems across Name List Screening (NLS) and Transaction Screening (TS). This role offers the opportunity to work on key model initiatives within a cross-regional team and contribute directly to the bank's risk mitigation efforts against financial crime. You will support model tuning and development efforts, support regulatory deliverables, and collaborate with cross-functional teams including Compliance, Data Engineering, and Technology.
Your key responsibilities
Support the design and implementation of the model framework for name and transaction screening, including coverage, data, model development, and optimisation.
Support key data initiatives, including, but not limited to, data lineage, data quality controls, and data quality issues management.
Document model logic and liaise with Compliance and Model Risk Management teams to ensure screening systems and scenarios adhere to all model governance standards.
Participate in research projects on innovative solutions to make detection models more proactive.
Assist in model testing, calibration, and performance monitoring.
Ensure detailed metrics and reporting are developed to provide transparency and maintain the effectiveness of name and transaction screening models.
Support all examinations and reviews performed by regulators, monitors, and internal audit.
Your skills and experience
Advanced degree (Master's or PhD) in a quantitative discipline (Mathematics, Computer Science, Data Science, Physics, or Statistics).
1-3 years of experience in data analytics or model development (internships included).
Proficiency in designing, implementing (Python, Spark, cloud environments), and deploying quantitative models in a large financial institution, preferably in Front Office. Hands-on approach needed.
Experience utilizing Machine Learning and Artificial Intelligence.
Experience with data and the ability to clearly articulate data requirements as they relate to NLS and TS, including comprehensiveness, quality, accuracy, and integrity.
Knowledge of the bank's products and services, including those related to corporate banking, investment banking, private banking, and asset management.
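For intuition only, the toy snippet below illustrates the basic idea behind name list screening: flagging customer names that closely resemble a watchlist entry via simple string similarity. This is not the Bank's actual model; the watchlist, threshold, and scoring choice are illustrative assumptions.

```python
# Toy name-screening illustration: flag names whose similarity to a
# watchlist entry exceeds a threshold. Real systems use far richer
# matching (phonetics, transliteration, tokenization, ML scoring).
from difflib import SequenceMatcher

WATCHLIST = ["John Q. Doe", "Acme Trading LLC"]  # hypothetical entries
THRESHOLD = 0.85  # hypothetical tuning parameter

def screen(name: str) -> list[tuple[str, float]]:
    """Return watchlist entries whose similarity to `name` exceeds the threshold."""
    hits = []
    for entry in WATCHLIST:
        score = SequenceMatcher(None, name.lower(), entry.lower()).ratio()
        if score >= THRESHOLD:
            hits.append((entry, score))
    return hits

print(screen("Jon Q Doe"))  # likely flagged for analyst review
```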
Posted 2 days ago
8.0 - 12.0 years
30 - 35 Lacs
Pune
Work from Office
Role Description
Our team is part of the area Technology, Data, and Innovation (TDI) Private Bank. Within TDI, Partner data is the central client reference data system in Germany. As a core banking system, many banking processes and applications are integrated with it and communicate via roughly 2,000 interfaces. From a technical perspective, we focus on the mainframe but also build solutions on-premise and in the cloud, RESTful services, and an Angular frontend. Next to maintenance and the implementation of new CTB requirements, the content focus also lies on the regulatory and tax topics surrounding a partner/client. We are looking for a highly motivated candidate for the Cloud Data Engineer area.
Your key responsibilities
You are responsible for the implementation of the new project on GCP (Spark, Dataproc, Dataflow, BigQuery, Terraform, etc.) across the whole SDLC chain
You are responsible for supporting the migration of current functionalities to Google Cloud
You are responsible for the stability of the application landscape and support software releases
You also support L3 topics and application governance
You are responsible in the CTM area for coding as part of an agile team (Java, Scala, Spring Boot)
Your skills and experience
You have experience with databases (BigQuery, Cloud SQL, NoSQL, Hive, etc.) and development, preferably for Big Data and GCP technologies
Strong understanding of the Data Mesh approach and integration patterns
Understanding of Party data and integration with Product data
Your architectural skills for big data solutions, especially interface architecture, allow a fast start
You have experience in at least: Spark, Java, Scala and Python, Maven, Artifactory, the Hadoop ecosystem, GitHub Actions, GitHub, Terraform scripting
You have knowledge of customer reference data, customer opening processes, and preferably regulatory topics around know-your-customer processes
You can work very well in teams but also independently, and are constructive and target-oriented
Your English skills are good and you can communicate both professionally and informally in small talk with the team
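As a small, hedged example of the kind of GCP task this role covers, the snippet below loads a CSV from Cloud Storage into BigQuery with the official Python client. The project, dataset, and bucket names are placeholders.

```python
# Minimal BigQuery load sketch using the official google-cloud-bigquery
# client. All resource names are hypothetical placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,
    autodetect=True,  # infer the schema from the file
    write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
)

load_job = client.load_table_from_uri(
    "gs://example-bucket/partner/partner_data.csv",
    "example-project.partner_data.partners",
    job_config=job_config,
)
load_job.result()  # block until the load completes

table = client.get_table("example-project.partner_data.partners")
print(table.num_rows, "rows loaded")
```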
Posted 2 days ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
LiveRamp is the data collaboration platform of choice for the world’s most innovative companies. A groundbreaking leader in consumer privacy, data ethics, and foundational identity, LiveRamp is setting the new standard for building a connected customer view with unmatched clarity and context while protecting precious brand and consumer trust. LiveRamp offers complete flexibility to collaborate wherever data lives to support the widest range of data collaboration use cases—within organizations, between brands, and across its premier global network of top-quality partners.
Hundreds of global innovators, from iconic consumer brands and tech giants to banks, retailers, and healthcare leaders turn to LiveRamp to build enduring brand and business value by deepening customer engagement and loyalty, activating new partnerships, and maximizing the value of their first-party data while staying on the forefront of rapidly evolving compliance and privacy requirements.
LiveRamp is looking for a Senior Software Development Engineer (SDE) to join our team focused on building scalable, high-performance backend systems that power our data collaboration clean room platform. In this role, you will design and implement distributed services that enable secure data processing across organizations. You’ll work with modern data infrastructure such as Apache Spark, Airflow, and cloud-native tools. You will help lead the development of reliable microservices that integrate with platforms like Snowflake, Databricks, and SingleStore. This is a hands-on role that requires deep technical expertise, strong system design skills, and a passion for solving complex large-data challenges in a data-collaborative environment.
Must-Have Skills
5+ years of experience designing and implementing scalable backend systems and distributed services in high-performance environments.
Proficiency or strong familiarity with Apache Spark and Apache Airflow in a Big Data enterprise context.
Deep expertise in developing microservices using Java plus either Go or Python.
Hands-on experience with at least one major cloud provider (AWS, GCP, or Azure), with a solid understanding of each platform’s strengths and trade-offs.
Familiarity with Kubernetes, Helm, Terraform, and cloud-native platforms such as Snowflake, Databricks, or SingleStore.
Proven experience in leading and mentoring engineering teams.
Strong ability to design complex, distributed systems with reliability and scale in mind.
Nice To Have
Experience with modern data platforms such as Apache Iceberg or Trino.
Benefits
Flexible paid time off, paid holidays, options for working from home, and paid parental leave.
Comprehensive Benefits Package: LiveRamp offers a comprehensive benefits package designed to help you be your best self in your personal and professional lives. Our benefits package offers medical, dental, vision, accident, life and disability, an employee assistance program, voluntary benefits as well as perks programs for your healthy lifestyle, career growth, and more. Your medical benefits extend to your dependents, including parents.
More About Us
LiveRamp’s mission is to connect data in ways that matter, and doing so starts with our people. We know that inspired teams enlist people from a blend of backgrounds and experiences. And we know that individuals do their best when they not only bring their full selves to work but feel like they truly belong. Connecting LiveRampers to new ideas and one another is one of our guiding principles—one that informs how we hire, train, and grow our global team across nine countries and four continents. Learn more about Diversity, Inclusion, & Belonging (DIB) at LiveRamp.
To all recruitment agencies: LiveRamp does not accept agency resumes. Please do not forward resumes to our jobs alias, LiveRamp employees or any other company location. LiveRamp is not responsible for any fees related to unsolicited resumes.
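For illustration, here is a toy sketch of a small Python microservice endpoint of the sort this role involves, written with Flask. The routes, payload shape, and in-memory store are hypothetical simplifications; a production clean-room service would add authentication, validation, persistence, and observability.

```python
# Toy microservice sketch: submit a data-matching job and poll its status.
# Endpoints and the in-memory store are illustrative assumptions only.
from flask import Flask, jsonify, request

app = Flask(__name__)

JOBS = {}  # in-memory stand-in for a real job store

@app.post("/v1/match-jobs")
def create_match_job():
    payload = request.get_json(force=True)
    job_id = str(len(JOBS) + 1)
    JOBS[job_id] = {"status": "queued", "dataset": payload.get("dataset")}
    return jsonify({"job_id": job_id}), 202  # accepted for async processing

@app.get("/v1/match-jobs/<job_id>")
def get_match_job(job_id: str):
    job = JOBS.get(job_id)
    if job is None:
        return jsonify({"error": "not found"}), 404
    return jsonify(job)

if __name__ == "__main__":
    app.run(port=8080)
```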
Posted 2 days ago
The demand for professionals with expertise in Spark is on the rise in India. Spark, an open-source distributed computing system, is widely used for big data processing and analytics. Job seekers in India looking to explore opportunities in Spark can find a variety of roles in different industries.
Major hubs such as Bengaluru, Hyderabad, Pune, Chennai, and Gurugram have a high concentration of tech companies and startups actively hiring for Spark roles.
The average salary range for Spark professionals in India varies based on experience level:
- Entry-level: INR 4-6 lakhs per annum
- Mid-level: INR 8-12 lakhs per annum
- Experienced: INR 15-25 lakhs per annum
Salaries may vary based on the company, location, and specific job requirements.
In the field of Spark, a typical career progression may look like:
- Junior Developer
- Senior Developer
- Tech Lead
- Architect
Advancing in this career path often requires gaining experience, acquiring additional skills, and taking on more responsibilities.
Apart from proficiency in Spark, professionals in this field are often expected to have knowledge or experience in:
- Hadoop
- Java or Scala programming
- Data processing and analytics
- SQL databases
Having a combination of these skills can make a candidate more competitive in the job market.
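To make those skills concrete, here is a minimal, self-contained PySpark example that touches the core of the list above: building a DataFrame, aggregating with the DataFrame API, and expressing the same question in Spark SQL. The data is illustrative.

```python
# Minimal PySpark demo of core Spark skills: DataFrame API plus Spark SQL.
# The sales figures and city names are illustrative only.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("skills_demo").getOrCreate()

sales = spark.createDataFrame(
    [("Bengaluru", 120), ("Hyderabad", 95), ("Bengaluru", 80), ("Pune", 60)],
    ["city", "amount"],
)

# DataFrame API: total revenue per city
by_city = (sales.groupBy("city").sum("amount")
           .withColumnRenamed("sum(amount)", "revenue"))
by_city.show()

# Spark SQL: the same question expressed as a query
sales.createOrReplaceTempView("sales")
spark.sql(
    "SELECT city, SUM(amount) AS revenue FROM sales "
    "GROUP BY city ORDER BY revenue DESC"
).show()
```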
As you explore opportunities in Spark jobs in India, remember to prepare thoroughly for interviews and showcase your expertise confidently. With the right skills and knowledge, you can excel in this growing field and advance your career in the tech industry. Good luck with your job search!