
3305 Hive Jobs - Page 2

JobPe aggregates listings so they are easy to find, but applications are submitted directly on the original job portal.

4.0 - 6.0 years

0 Lacs

Gurgaon

On-site

Locations: Bengaluru | Gurgaon

Who We Are
Boston Consulting Group partners with leaders in business and society to tackle their most important challenges and capture their greatest opportunities. BCG was the pioneer in business strategy when it was founded in 1963. Today, we help clients with total transformation: inspiring complex change, enabling organizations to grow, building competitive advantage, and driving bottom-line impact. To succeed, organizations must blend digital and human capabilities. Our diverse, global teams bring deep industry and functional expertise and a range of perspectives to spark change. BCG delivers solutions through leading-edge management consulting along with technology and design, corporate and digital ventures, and business purpose. We work in a uniquely collaborative model across the firm and throughout all levels of the client organization, generating results that allow our clients to thrive.

What You'll Do
As part of BCG's X team, you will work closely with consulting teams on a diverse range of advanced analytics and engineering topics, leveraging analytical methodologies to deliver value to BCG's Consulting (case) teams and Practice Areas (domains) by providing analytical and engineering subject-matter expertise. As a Data Engineer, you will play a crucial role in designing, developing, and maintaining the data pipelines, systems, and solutions that empower our clients to make informed business decisions. You will collaborate closely with cross-functional teams, including data scientists, analysts, and business stakeholders, to deliver high-quality data solutions that meet our clients' needs.

You're Good At
- Delivering original analysis and insights to case teams, typically owning all or part of an analytics module while integrating with a case team.
- Designing, developing, and maintaining efficient and robust data pipelines for extracting, transforming, and loading data from various sources into data warehouses, data lakes, and other storage solutions.
- Building data-intensive solutions that are highly available, scalable, reliable, secure, and cost-effective using programming languages like Python and PySpark.
- Deep knowledge of big data querying and analysis tools, such as PySpark, Hive, Snowflake, and Databricks.
- Broad expertise in at least one cloud platform: AWS, GCP, or Azure.
- Working knowledge of automation and deployment tools such as Airflow, Jenkins, and GitHub Actions, as well as infrastructure-as-code technologies like Terraform and CloudFormation.
- Good understanding of DevOps, CI/CD pipelines, orchestration, and containerization tools like Docker and Kubernetes.
- Basic understanding of machine learning methodologies and pipelines.
- Communicating analytical insights through sophisticated synthesis and packaging of results (including PPT slides and charts) with consultants; collecting, synthesizing, and analyzing case-team learnings and inputs into new best practices and methodologies.
- Communication skills: strong communication skills enabling effective collaboration with both technical and non-technical team members.

Thinking Analytically
You should be strong in analytical solutioning, with hands-on experience delivering advanced analytics through the entire analytics life cycle, and able to develop and codify knowledge and provide analytical advice where required.
What You'll Bring
- Bachelor's or Master's degree in computer science engineering/technology.
- At least 4-6 years within the relevant domain of data engineering across industries, with work experience providing analytics solutions in a commercial setting; consulting experience is a plus.
- Proficient understanding of distributed computing principles, including management of Spark clusters with all included services; experience with various implementations of Spark preferred.
- Hands-on experience with data engineering tasks such as productizing data pipelines, building CI/CD pipelines, and code orchestration using tools like Airflow and DevOps tooling.

Good to have:
- Software engineering concepts and best practices, such as API design and development, testing frameworks, and packaging.
- Experience with NoSQL databases such as HBase, Cassandra, or MongoDB.
- Knowledge of web development technologies.
- Understanding of the different stages of machine learning system design and development.

Who You'll Work With
You will work with the case team and/or client technical POCs and the broader X team.

Boston Consulting Group is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, age, religion, sex, sexual orientation, gender identity/expression, national origin, disability, protected veteran status, or any other characteristic protected under national, provincial, or local law, where applicable, and those with criminal histories will be considered in a manner consistent with applicable state and local laws. BCG is an E-Verify Employer. Click here for more information on E-Verify.
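The ETL work this posting describes (extract from sources such as Hive, transform in PySpark, load into a warehouse or lake) follows a common pattern. A minimal sketch, with the table name and output path invented for illustration:

```python
# Minimal PySpark ETL sketch: Hive source -> transform -> Parquet data lake.
# The table name and output path are illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("orders-daily-etl")
    .enableHiveSupport()          # lets Spark read managed Hive tables
    .getOrCreate()
)

# Extract: read a raw Hive table
orders = spark.table("raw_db.orders")

# Transform: drop bad rows, derive a date column, aggregate
daily = (
    orders
    .filter(F.col("order_amount") > 0)
    .withColumn("order_date", F.to_date("order_ts"))
    .groupBy("order_date", "region")
    .agg(F.sum("order_amount").alias("revenue"))
)

# Load: write partitioned Parquet to the data lake
daily.write.mode("overwrite").partitionBy("order_date").parquet("s3://demo-lake/curated/daily_revenue")
```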

Posted 1 hour ago

Apply

0 years

4 - 7 Lacs

Gurgaon

On-site

Our Purpose
Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we’re helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart, and accessible. Our technology and innovation, partnerships, and networks combine to deliver a unique set of products and services that help people, businesses, and governments realize their greatest potential.

Title and Summary
Associate Managing Consultant, Advisors & Consulting Services, Performance Analytics

Advisors & Consulting Services
Services within Mastercard is responsible for acquiring, engaging, and retaining customers by managing fraud and risk, enhancing cybersecurity, and improving the digital payments experience. We provide value-added services and leverage expertise, data-driven insights, and execution. Our Advisors & Consulting Services team combines traditional management consulting with Mastercard’s rich data assets, proprietary platforms, and technologies to provide clients with powerful strategic insights and recommendations. Our teams work with a diverse global customer base across industries, from banking and payments to retail and restaurants.

The Advisors & Consulting Services group has five specializations: Strategy & Transformation, Performance Analytics, Business Experimentation, Marketing, and Program Management. Our Performance Analytics consultants translate data into insights by leveraging Mastercard and customer data to design, implement, and scale analytical solutions for customers. They use qualitative and quantitative analytical techniques and enterprise applications to synthesize analyses into clear recommendations and impactful narratives.

Positions for different specializations and levels are available in separate job postings. Please review our consulting specializations to learn more about all opportunities and apply for the position that is best suited to your background and experience: https://careers.mastercard.com/us/en/consulting-specializations-at-mastercard

Roles and Responsibilities

Client Impact
- Manage deliverable development and workstreams on projects across a range of industries and problem statements
- Contribute to and/or develop analytics strategies and programs for large, regional, and global clients by leveraging data and technology solutions to unlock client value
- Manage working relationships with client managers, and act as a trusted and reliable partner
- Create predictive models using segmentation and regression techniques to drive profits
- Review analytics end-products to ensure accuracy, quality, and timeliness
- Proactively seek new knowledge and structure project work to facilitate the capture of intellectual capital with minimal oversight

Team Collaboration & Culture
- Develop sound business recommendations and deliver effective client presentations
- Plan, organize, and structure own work and that of junior project delivery consultants to identify effective analysis structures to address client problems and synthesize analyses into relevant findings
- Lead team and external meetings, and lead or co-lead project management
- Contribute to the firm's intellectual capital and solution development
- Grow from coaching to enable ownership of day-to-day project management across client projects, and mentor junior consultants
- Develop effective working relationships with local and global teams, including business partners

Qualifications

Basic qualifications
- Undergraduate degree with data and analytics experience in business intelligence and/or descriptive, predictive, or prescriptive analytics
- Experience managing clients or internal stakeholders
- Ability to analyze large datasets and synthesize key findings to provide recommendations via descriptive analytics and business intelligence
- Knowledge of metrics, measurements, and benchmarking for complex and demanding solutions across multiple industry verticals
- Data and analytics experience, such as working with data analytics software (e.g., Python, R, SQL, SAS) and building, managing, and maintaining database structures
- Advanced Word, Excel, and PowerPoint skills
- Ability to perform multiple tasks with multiple clients in a fast-paced, deadline-driven environment
- Ability to communicate effectively in English and the local office language (if applicable)
- Eligibility to work in the country where you are applying, as well as to apply for travel visas as required by travel needs

Preferred qualifications
- Additional data and analytics experience working with the Hadoop framework and coding using Impala, Hive, or PySpark, or working with data visualization tools (e.g., Tableau, Power BI)
- Experience managing tasks or workstreams in a collaborative team environment
- Experience coaching junior delivery consultants
- Relevant industry expertise
- MBA or master’s degree with relevant specialization (not required)

Corporate Security Responsibility
All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization; therefore, it is expected that every person working for, or on behalf of, Mastercard is responsible for information security and must:
- Abide by Mastercard’s security policies and practices;
- Ensure the confidentiality and integrity of the information being accessed;
- Report any suspected information security violation or breach; and
- Complete all periodic mandatory security trainings in accordance with Mastercard’s guidelines.
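The "predictive models using segmentation and regression techniques" responsibility above is often implemented as a clustering step followed by per-segment regressors. A minimal scikit-learn sketch on synthetic data (all names and values are illustrative):

```python
# Segment customers with k-means, then fit a regression per segment.
# Synthetic data; feature meanings are illustrative only.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))            # e.g., spend, frequency, tenure
y = 2.0 * X[:, 0] - X[:, 1] + rng.normal(scale=0.1, size=500)  # e.g., profit

segments = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

models = {}
for seg in np.unique(segments):
    mask = segments == seg
    models[seg] = LinearRegression().fit(X[mask], y[mask])
    print(f"segment {seg}: R^2 = {models[seg].score(X[mask], y[mask]):.3f}")
```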

Posted 1 hour ago

Apply

2.0 - 5.0 years

0 Lacs

Gurgaon

On-site

At American Express, our culture is built on a 175-year history of innovation, shared values and Leadership Behaviors, and an unwavering commitment to back our customers, communities, and colleagues. As part of Team Amex, you'll experience this powerful backing with comprehensive support for your holistic well-being and many opportunities to learn new skills, develop as a leader, and grow your career. Here, your voice and ideas matter, your work makes an impact, and together, you will help us define the future of American Express.

How will you make an impact in this role? Join Team Amex and let's lead the way together.

About the Team
American Express is on a journey to provide the world’s best customer experience every day. The Commercial Data Office (CoDO) team, within Global Commercial Services (GCS), is focused on powering the best customer experience and business growth through streamlined data. With continuous changes in the regulatory environment and ongoing innovation through data, we play a key role in strengthening GCS critical enablers and supporting new growth opportunities. In partnership with the Enterprise Chief Data Office and across GCS, CoDO is tasked with building new, innovative data solutions for our customers while adhering to regulations and data management standard methodologies. The team’s scope comprises three pillars:
- Drive Strategic Growth & Revenue: defining the vision and roadmap to transform data into a key asset that powers business growth
- Modernize Data Management: ongoing data management, data discovery, and collaboration across Global Commercial Services
- Ensure Health of the Commercial Business: striving for 100% reliability of current data platforms and capabilities while developing more agility and scalability for the future

Purpose of the Role
This role is focused on solutioning for GCS data governance and data quality (DQ), and on managing DQ issues. This involves partnering with stakeholders and GCS teams to gather requirements; performing root cause analysis via data mining and analysis; identifying process gaps or monitoring existing solutions; proposing and presenting possible solutions to leaders and stakeholders; and, where required, working with technology teams to develop scalable data quality solutions.

Responsibilities
- Accountable for the necessary remediation of data quality issues originating from business processes, and for notifying key stakeholders of remediation activities
- Ensure data quality issues are captured, managed, and given a remediation plan that is thorough, complete, and timely, so a comprehensive fix is completed
- Write and execute SQL and Python scripts to analyze/profile data, solve for data quality issues in a big data environment, and propose robust data quality controls
- Maintain knowledge of the business data and systems that your domain/business unit produces and consumes
- Participate in project meetings and communicate effectively with leaders, peers, architects, system analysts, project managers, and others, reporting project status as required
- Utilize data quality and data profiling tools to build the DQ framework and resolve issues
- Understand, gather, and translate project/user requirements into well-defined features/user stories with testable acceptance criteria, collaborating with multiple stakeholders on requirements and prioritization
- Work closely with the technology team to develop, test, and deliver the defined deliverables, and perform User Acceptance Testing (UAT) before accepting stories
- Raise any bugs/gaps identified and work with the technology team to get them fixed; provide proactive solutioning for any sprint blockers
- Apply data management concepts and practices such as data security, data quality, and metadata
- Responsible for building scalable data quality frameworks for enterprise projects

Minimum Qualifications

Business Outcomes:
- Deliver high-quality, accurate data quality solutioning within deadlines
- Understand gaps and remediate them as per business requirements
- Accountable for the Agile delivery framework; track progress and share project updates to ensure stakeholder satisfaction

Academic Background & Past Experience
- MBA or Bachelor's/advanced degree in science, computer science, information technology, or information management
- 2-5 years of relevant experience as a product/business analyst

Functional Skills/Capabilities
- Strong stakeholder management and excellent written and verbal communication
- Experience in requirement gathering, product management, backlog creation, project tracking, and comprehensive documentation
- Experience with Agile Scrum processes and principles, actively leading Agile ceremonies (Scrum, Grooming, Retrospective)
- Ability to multi-task with high precision and quality delivery
- Ability to independently drive, track, and ensure accurate delivery

Technical Skills/Capabilities
- Strong SQL/Python and MS Excel skills for data querying and analysis
- JIRA for user stories and Agile Scrum management
- Confluence, MS Word, and PowerPoint for documentation
- SQL/Hive for data analysis
- ML experience a plus but not compulsory
- Understanding of data governance, data quality, data lineage, and data transformation is a plus

Preferred Qualifications:
- A good balance of technical knowledge and business acumen

We back you with benefits that support your holistic well-being so you can be and deliver your best. This means caring for you and your loved ones' physical, financial, and mental health, as well as providing the flexibility you need to thrive personally and professionally:
- Competitive base salaries
- Bonus incentives
- Support for financial well-being and retirement
- Comprehensive medical, dental, vision, life insurance, and disability benefits (depending on location)
- Flexible working model with hybrid, onsite, or virtual arrangements depending on role and business need
- Generous paid parental leave policies (depending on your location)
- Free access to global on-site wellness centers staffed with nurses and doctors (depending on location)
- Free and confidential counseling support through our Healthy Minds program
- Career development and training opportunities

American Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, disability status, age, or any other status protected by law. Offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.
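For the SQL/Python data-profiling work described above, a minimal pandas sketch of the kind of profile a DQ analyst might script (the file, column names, and rule are illustrative placeholders):

```python
# Quick data-quality profile: null rates, distinct counts, duplicates,
# and one example business rule. All names are illustrative placeholders.
import pandas as pd

df = pd.read_parquet("accounts.parquet")   # hypothetical input extract

profile = pd.DataFrame({
    "null_rate": df.isna().mean(),
    "distinct": df.nunique(),
})
print(profile)

dupes = df.duplicated(subset=["account_id"]).sum()
print(f"duplicate account_id rows: {dupes}")

# Example rule: balances must be non-negative; flag violations for remediation
violations = df[df["balance"] < 0]
print(f"negative-balance rows: {len(violations)}")
```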

Posted 1 hour ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site


Join our Data Engineering team based in Gurugram and you will have the opportunity to work in a collaborative and dynamic environment. Our team plays a key role in implementing critical liquidity calculations, creating data visualisations, and delivering data to downstream systems.

At Macquarie, our advantage is bringing together diverse people and empowering them to shape all kinds of possibilities. We are a global financial services group operating in 31 markets and with 56 years of unbroken profitability. You’ll be part of a friendly and supportive team where everyone, no matter what role, contributes ideas and drives outcomes.

What role will you play?
In this role, you will regularly exercise problem-solving skills and apply creative solutions to a varied range of technical problems. You will support the development of data pipelines and new platform features and play a critical role with our operational and business stakeholders.

What You Offer
- Proficient in Python coding with solid SQL experience (complex queries and DDL);
- Familiar with Docker, Kubernetes, AWS, and Linux/Unix environments;
- Knowledgeable in technical solutions, design patterns, and code for medium/complex applications in clustered environments;
- Experienced with big data querying tools (e.g., Redshift, Hive, Spark, Presto) and data pipeline orchestration tools (e.g., Airflow, Argo Workflows); and
- Skilled in API-based integration, source control (Bitbucket or similar), and security best practices.

We love hearing from anyone inspired to build a better future with us. If you're excited about the role or working at Macquarie, we encourage you to apply.

About Technology
Technology enables every aspect of Macquarie, for our people, our customers and our communities. We’re a global team that is passionate about accelerating the digital enterprise, connecting people and data, building platforms and applications and designing tomorrow’s technology solutions.

Our commitment to diversity, equity and inclusion
We are committed to fostering a diverse, equitable and inclusive workplace. We encourage people from all backgrounds to apply and welcome all identities, including race, ethnicity, cultural identity, nationality, gender (including gender identity or expression), age, sexual orientation, marital or partnership status, parental, caregiving or family status, neurodiversity, religion or belief, disability, or socio-economic background. We welcome further discussions on how you can feel included and belong at Macquarie as you progress through our recruitment process. Our aim is to provide reasonable adjustments to individuals who may need support during the recruitment process and through working arrangements. If you require additional assistance, please let us know in the application process.
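The posting lists Airflow among its orchestration tools. A minimal Airflow 2.x-style DAG sketch (the DAG id, task names, and schedule are illustrative placeholders):

```python
# Minimal Airflow DAG sketch for a daily two-step pipeline.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull liquidity inputs")       # placeholder for a real extract

def transform():
    print("run liquidity calculations")  # placeholder for a real transform

with DAG(
    dag_id="liquidity_daily",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t1 >> t2   # extract runs before transform
```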

Posted 1 hour ago

Apply

4.0 - 6.0 years

0 Lacs

Noida

On-site

Noida | 4 to 6 years | Full Time

Overview
We are looking for a skilled and passionate Flutter Engineer (SDE 2) to join our mobile development team. In this role, you'll be responsible for building high-quality, cross-platform mobile applications that offer seamless and engaging user experiences. You will take ownership of key product features, collaborate with cross-functional teams, and apply engineering best practices to deliver scalable and maintainable code. This is a great opportunity to grow your expertise while making a meaningful impact in a fast-paced, product-driven environment.

Responsibilities
- Design, develop, and maintain cross-platform mobile applications using Flutter and Dart.
- Collaborate with product managers, designers, and backend engineers to implement new features from API integration to UI/UX.
- Write clean, maintainable, and testable code while following industry best practices and architecture patterns.
- Troubleshoot and resolve bugs, performance bottlenecks, and technical issues.
- Maintain a customer-first mindset, ensuring a great user experience across all devices.
- Take ownership of modules or components, working both independently and collaboratively with the team.
- Stay updated with the latest Flutter and mobile development trends and technologies.
- Use version control tools like Git for efficient code collaboration and management.
- Participate in code reviews and provide thoughtful feedback to improve code quality and consistency.
- Contribute to CI/CD pipelines to ensure smooth and reliable app releases.

Requirements

Must Have
- Proven experience in developing and deploying mobile applications using Flutter and Dart.
- Strong understanding of Flutter architecture patterns such as BLoC, Provider, Riverpod, or MVVM.
- Good knowledge of mobile development principles, UI/UX design, and app architecture.
- Experience with RESTful API integration and a solid grasp of API design.
- Proficiency in debugging, performance profiling, and optimization.
- Strong problem-solving skills with a “build fast and iterate” mindset.
- Excellent communication and collaboration skills.
- Comfortable working in a dynamic, fast-paced environment.

Good to Have
- Experience with state management solutions like Riverpod, GetX, or MobX.
- Familiarity with Flutter’s new features such as Flutter Web, Flutter Desktop, or integration with native modules.
- Exposure to automated testing (unit, widget, and integration tests) using tools like Mockito, flutter_test, etc.
- Understanding of local databases (e.g., SQLite, Hive, Drift).
- Experience with CI/CD tools and deployment to the Play Store and App Store.
- Familiarity with animations and building rich UI/UX experiences.
- Understanding of SOLID principles and clean code practices.

Posted 1 hour ago

Apply

2.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Job Description
- Research, design, develop, and modify computer vision and machine learning algorithms and models, leveraging experience with technologies such as Caffe, Torch, or TensorFlow.
- Shape product strategy for highly contextualized applied ML/AI solutions by engaging with customers, solution teams, discovery workshops, and prototyping initiatives.
- Help build a high-impact ML/AI team by supporting recruitment, training, and development of team members.
- Serve as an evangelist by engaging in the broader ML/AI community through research, speaking/teaching, formal collaborations, and/or other channels.

Knowledge & Abilities
- Designing integrations of, and tuning, machine learning and computer vision algorithms
- Researching and prototyping techniques and algorithms for object detection and recognition
- Convolutional neural networks (CNNs) for image classification and object detection
- Familiarity with embedded vision processing systems
- Open-source tools and platforms
- Statistical modeling, data extraction, and analysis
- Constructing, training, evaluating, and tuning neural networks

Mandatory Skills
- One or more of the following: Java, C++, Python
- Deep learning frameworks such as Caffe, Torch, or TensorFlow, and an image/video vision library such as OpenCV, Clarifai, Google Cloud Vision, etc.
- Supervised and unsupervised learning
- Feature learning, text mining, and prediction models (e.g., deep learning, collaborative filtering, SVM, and random forest) on big data computation platforms (Hadoop, Spark, Hive, and Tableau)
- One or more of the following: Tableau, Hadoop, Spark, HBase, Kafka

Experience
- 2-5 years of work or educational experience in machine learning or artificial intelligence
- Creation and application of machine learning algorithms to a variety of real-world problems with large datasets
- Building scalable machine learning systems and data-driven products, working with cross-functional teams
- Working with cloud services like AWS, Microsoft, IBM, and Google Cloud
- Working with one or more of the following: natural language processing, text understanding, classification, pattern recognition, recommendation systems, targeting systems, ranking systems, or similar

Nice to Have
- Contribution to research communities and/or efforts, including publishing papers at conferences such as NIPS, ICML, ACL, CVPR, etc.

Education
BA/BS (advanced degree preferable) in Computer Science, Engineering, or a related technical field, or equivalent practical experience.

Wipro is an Equal Employment Opportunity employer and makes all employment and employment-related decisions without regard to a person's race, sex, national origin, ancestry, disability, sexual orientation, or any other status protected by applicable law.

Mandatory Skills: Generative AI. Experience: 3-5 Years.

Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention. Of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA - as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention. Come to Wipro. Realize your ambitions. Applications from people with disabilities are explicitly welcome.
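For the image-classification work the posting mentions under TensorFlow, a minimal Keras CNN sketch (the input shape and class count are illustrative):

```python
# Minimal TensorFlow/Keras CNN for image classification (illustrative shapes).
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(32, 32, 3)),          # small RGB images
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(10, activation="softmax"),   # 10 classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(train_images, train_labels, epochs=5)  # with real data
```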

Posted 1 hour ago

Apply

7.0 years

0 Lacs

Andhra Pradesh, India

On-site


7+ years of experience in Big Data with strong expertise in Spark and Scala.

Mandatory Skills:
- Big Data, primarily Spark and Scala
- Strong knowledge of HDFS, Hive, and Impala, with knowledge of Unix, Oracle, and Autosys
- Not limited to Spark batch; Spark Streaming experience is required
- NoSQL DB experience: HBase/Mongo/Couchbase

Good to Have:
- Agile methodology and banking expertise
- Strong communication skills
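The role asks for Spark Streaming in Scala; the same structured-streaming pattern is sketched below in PySpark for brevity (the broker address and topic name are illustrative placeholders):

```python
# Structured Streaming sketch: read a Kafka topic, aggregate, write to console.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("txn-stream").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "transactions")
    .load()
)

# Count records per one-minute window (Kafka source provides `timestamp`)
counts = events.groupBy(F.window(F.col("timestamp"), "1 minute")).count()

query = counts.writeStream.outputMode("complete").format("console").start()
query.awaitTermination()
```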

Posted 2 hours ago

Apply

5.0 - 7.0 years

10 - 14 Lacs

Hyderabad

Work from Office


Design, develop, and maintain scalable data processing applications using Spark and PySpark.

API Development
- 5+ years of experience in at least one of the following: Java, Spark, Scala, Python
- API development expertise
- Write efficient, reusable, and well-documented code
- Design and implement data pipelines using tools like Spark and PySpark
- Strong analytical and problem-solving abilities to address technical challenges
- Perform code reviews and provide constructive feedback to improve code quality
- Design and implement data processing tasks that integrate with SQL databases
- Proficiency in data modeling and in data lake, lakehouse, and data warehousing concepts
- Experience with cloud platforms like AWS
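For "data processing tasks that integrate with SQL databases", a minimal PySpark-over-JDBC sketch (connection details, credentials, and table names are illustrative placeholders):

```python
# Sketch: Spark job that reads from a SQL database over JDBC, aggregates,
# and writes a summary table back.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sql-integration").getOrCreate()

orders = (
    spark.read.format("jdbc")
    .option("url", "jdbc:postgresql://db-host:5432/sales")
    .option("dbtable", "public.orders")
    .option("user", "etl_user")
    .option("password", "***")
    .load()
)

summary = orders.groupBy("region").sum("amount")

(summary.write.format("jdbc")
    .option("url", "jdbc:postgresql://db-host:5432/sales")
    .option("dbtable", "public.region_summary")
    .option("user", "etl_user")
    .option("password", "***")
    .mode("overwrite")
    .save())
```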

Posted 2 hours ago

Apply

4.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Basic Functions
- 4-6 years of experience in enterprise application development and support using Microsoft technologies such as .NET, SQL, C#, MVC, JavaScript, jQuery, and ReactJS
- 2+ years of experience with Azure cloud services such as Synapse, Databricks, Data Factory, Azure App Service, and Kubernetes
- Experience in data modeling and data integration, reporting, and data governance and security
- Source code available on Git, coding champion, and so on
- Produce scalable, flexible, high-quality code that satisfies both functional and non-functional requirements
- Develop, deploy, test, and maintain technical assets in a highly secure and integrated enterprise computing environment; support functional testing and UI/UX testing
- Participate in architecture, data modeling, and overall design sessions
- Coordinate with development and business teams to ensure the smooth execution of the project
- Collaborate/communicate with the on-site project team and business users as required
- Cross-train and mentor team members to encourage knowledge sharing

Essential Functions
- Strong problem-solving and analytical skills, and the ability to “roll up your sleeves” and work to create timely solutions and resolutions: to validate, verify, communicate, and resolve application issues
- Ability to work on multiple product features simultaneously
- Quick learner with the ability to understand the product’s functionality end to end
- Opportunity to try out bleeding-edge technologies to provide POCs, which will be evaluated and put to use if approved
- Strong knowledge of algorithms, design patterns, and fundamental computer science concepts and data structures
- Experience working in Agile (SCRUM) environments and familiarity with iterative development cycles
- Experience implementing authentication and authorization with OAuth, and use of Single Sign-On and SAML-based authentication

Primary Internal Interactions
- Review with the overall Product Manager and AVP for improvements in the product development lifecycle
- Assessment meetings with VP and above for additional product development features
- Train and mentor the juniors in the team

Primary External Interactions
- Communicate with onshore stakeholders and executive team members
- Help the Product Management Group set the product roadmap and help identify future sellable product features
- Client interactions to better understand expectations and streamline solutions; if required, act as a bridge between the client and the technology teams

Skills

Technical Skills Required
- Full-stack developer experienced in ASP.NET, C#, MVC, JavaScript, jQuery, React, and SQL Server
- Azure cloud: Synapse, Databricks, Data Factory, Azure App Service, Kubernetes
- Experience migrating on-prem applications to Azure cloud

Skills Nice to Have
- Experience with big data tools, not limited to Python, PySpark, and Hive
- Expertise in US healthcare insurance
- Stack Overflow account score
- Technical blogs and technical write-ups
- Contributions to open-source projects
- Certifications in Agile and Waterfall methodologies

Process Specific Skills
- Delivery domain: product roadmap development
- Business domain: US healthcare insurance and preventive analytics, care optimization, population management

Soft Skills
- Understanding of the healthcare business vertical and the business terms within it
- Good analytical skills
- Strong communication skills, both oral and written
- Ability to work with various stakeholders across various geographies
- Excellent team player, as well as an individual contributor if required
Working Hours
General shift: 11 AM to 8 PM; will be required to extend as per project release needs.

Posted 2 hours ago

Apply

3.0 years

0 Lacs

Bengaluru East, Karnataka, India

On-site


At CommBank, we never lose sight of the role we play in other people’s financial wellbeing. Our focus is to help people and businesses move forward, to progress. To make the right financial decisions and achieve their dreams, targets, and aspirations. Regardless of where you work within our organisation, your initiative, talent, ideas, and energy all contribute to the impact that we can make with our work. Together we can achieve great things.

Job Title: Senior Associate Data Scientist
Location: Bangalore
Business & Team: Home Buying Decision Science

Impact & Contribution
The Senior Associate Data Scientist will use technical knowledge and understanding of the business domain to deliver moderately to highly complex data science projects independently or with minimal guidance. You will also engage and collaborate with business stakeholders to clearly articulate findings to solve business problems.

Roles & Responsibilities
- Analyse complex data sets to extract insights and identify trends.
- Develop predictive models and algorithms to solve business problems.
- Work on deployment of models in production.
- Collaborate with cross-functional teams to understand requirements and deliver data-driven solutions.
- Clean, preprocess, and manipulate data for analysis through programming.
- Communicate findings and recommendations to stakeholders through reports and presentations.
- Stay updated with industry trends and best practices in data science.
- Contribute to the development and improvement of data infrastructure and processes.
- Design experiments and statistical analyses to validate hypotheses and improve models.
- Continuously learn and enhance skills in data science techniques and tools.
- Strongly support the adoption of data science across the organization.
- Identify problems in the products, services, and operations of the bank and solve them with innovative, research-driven solutions.

Essential Skills
- Strong hands-on programming experience in Python (mandatory), R, SQL, Hive, and Spark.
- More than 3 years of relevant experience.
- Ability to write well-designed, modular, and optimized code.
- Knowledge of H2O.ai, GitHub, Big Data, and ML engineering.
- Knowledge of commonly used data structures and algorithms.
- Good to have: knowledge of time series, NLP, deep learning, and generative AI.
- Good to have: knowledge and hands-on experience in developing solutions with large language models.
- Must have been part of projects building and deploying predictive models in production (financial services domain preferred) involving large and complex data sets.
- Strong problem-solving and critical-thinking skills.
- Curiosity, fast learning capability, and a team-player attitude are a must.
- Ability to communicate clearly and effectively.
- Demonstrated expertise through blog posts, research, participation in competitions, speaking opportunities, patents, and paper publications.
- Most importantly, the ability to identify and translate theories into real applications to solve practical problems.

Preferred Skills
- Good to have: knowledge and hands-on experience in data engineering or model deployment.
- Experience in data science in any of credit risk, pricing modelling and monitoring, sales and marketing, campaign analytics, ecommerce retail, or banking products for retail or business banking is preferred.
- Solid foundation in statistics and core ML algorithms at a mathematical (under-the-hood) level.

Education Qualifications
Bachelor’s degree in Engineering in Computer Science/Information Technology.
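A minimal sketch of the build-evaluate-persist loop behind "building and deploying predictive models in production", using scikit-learn on synthetic data (all names and parameters are illustrative):

```python
# Train, evaluate, and persist a simple classifier; synthetic stand-in data.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
import joblib

X, y = make_classification(n_samples=2000, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

model = GradientBoostingClassifier(random_state=42).fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"hold-out AUC: {auc:.3f}")

joblib.dump(model, "model.joblib")   # artifact handed to the deployment step
```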
If you're already part of the Commonwealth Bank Group (including Bankwest, x15ventures), you'll need to apply through Sidekick to submit a valid application. We’re keen to support you with the next step in your career. We're aware of some accessibility issues on this site, particularly for screen reader users. We want to make finding your dream job as easy as possible, so if you require additional support please contact HR Direct on 1800 989 696. Advertising End Date: 25/06/2025

Posted 2 hours ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Company Description
Aroma Hive is a passionate and growing food startup on a mission to revolutionise the way people experience everyday meals. Founded with a vision to blend quality, convenience, and innovation, we operate across three key verticals: in-house food manufacturing, cloud kitchens, and mobile food trucks. As a fully integrated food brand, we craft our own products in a dedicated manufacturing unit, ensuring consistency, hygiene, and bold flavours. Our cloud kitchens deliver freshly prepared meals straight to customers' doorsteps, while our food trucks bring the Aroma Hive experience to streets, events, and communities, making good food more accessible and exciting.

Role Description
This is a full-time on-site role for an On-field Sales Executive, located in Hyderabad. The Sales Executive will be responsible for visiting potential customers, demonstrating and explaining products, closing sales, maintaining relationships with existing customers, generating leads, and reporting sales activities. The role also involves setting and meeting sales targets, preparing sales reports, and continuously improving sales techniques.

Experience Required: Fresher - 3 years
Salary: As per industry standards

Qualifications
- Strong interpersonal and communication skills.
- Ability to work independently and demonstrate initiative.
- Basic knowledge of sales techniques and strategies.
- Willingness to travel within the assigned territory.
- Bachelor's degree in Business, Marketing, or a related field is a plus.
- Familiarity with the local market in Hyderabad is beneficial.

Interested candidates can share their resume at: 📧 bhavaniaromahive@gmail.com

Posted 2 hours ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Job Title: Manager–Data Science
Location: Full-Time / On-Site / Hyderabad

Company Overview
A global organization delivering advanced AI, data science, analytics, and technology solutions to enterprises across various industries. Focused on enabling digital transformation, the company builds scalable, data-driven strategies that improve operational efficiency, drive innovation, and unlock sustainable business value.

Role Summary
The Manager–Data Science is a strategic and hands-on leadership role responsible for driving end-to-end AI/ML initiatives. This role combines deep technical expertise with team leadership and cross-functional collaboration. Ideal candidates will bring strong experience in machine learning, generative AI, and data-driven problem solving, along with a working knowledge of cloud platforms such as AWS, Azure, or GCP.

Key Responsibilities

Technical Leadership & Delivery
• Design, build, and deploy scalable ML and generative AI models.
• Execute end-to-end ML pipelines: data preprocessing, training, evaluation, and deployment.
• Collaborate with data engineering teams to ensure seamless integration and performance in production.
• Automate workflows and improve model lifecycle efficiency.
• Translate business needs into actionable data science problems and solutions.

Generative AI & RAG
• Develop solutions using generative AI frameworks.
• Implement RAG pipelines with: fine-tuning techniques like QLoRA; document chunking and ingestion strategies; integration with vector databases (e.g., FAISS, Pinecone, Weaviate); and performance evaluation using appropriate metrics/frameworks.

Machine Learning & Forecasting
• Apply classical ML algorithms with techniques like L1/L2 regularization, feature selection, and model interpretation.
• Build forecasting models and evaluate them using MAPE, SMAPE, etc.
• Interpret model behavior during under/over forecasting scenarios.

Practice Growth
• Contribute to the AI/ML practice by developing reusable assets and internal tools.
• Lead innovation initiatives and stay current with emerging tools, models, and frameworks.
• Support business development through PoCs and technical strategy.

People Leadership
• Mentor and guide a team of data scientists.
• Set clear performance goals, conduct regular feedback, and foster a culture of learning and innovation.
• Work cross-functionally with stakeholders to ensure successful delivery.

Requirements
• Bachelor’s/Master’s degree in Computer Science, Data Science, Statistics, or a related field, with 8+ years of hands-on experience in AI/ML across industries like Retail, BFSI, Healthcare, or eCommerce.
• Strong proficiency in Python and SQL; experience with ML pipelines, cloud platforms (AWS, Azure, or GCP), and big data tools (Hadoop, Hive, PySpark).
• Solid understanding of ML algorithms, model interpretation, L1/L2 regularization, feature selection, and forecasting metrics (MAPE, SMAPE).
• Practical expertise in generative AI and RAG pipelines: QLoRA fine-tuning, document chunking, ingestion strategies, vector databases, and evaluation frameworks.
• Hands-on experience in speech-to-text NLP and audio data processing, with the ability to integrate voice-based data into ML workflows.
• Strong problem-solving and analytical mindset, with the ability to communicate complex ideas clearly, lead teams, and collaborate with cross-functional stakeholders in a fast-paced environment.
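The RAG retrieval step described above (chunk, embed, index in a vector database, fetch top passages) can be sketched minimally with FAISS and sentence-transformers; the model name and documents are illustrative:

```python
# Minimal RAG retrieval sketch: embed passages, index with FAISS,
# and fetch the top matches for a query to feed a generator.
import faiss
from sentence_transformers import SentenceTransformer

docs = [
    "QLoRA fine-tunes a quantized base model with low-rank adapters.",
    "Document chunking splits long texts into retrievable passages.",
    "MAPE and SMAPE are common forecasting error metrics.",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")
vectors = encoder.encode(docs, normalize_embeddings=True)

index = faiss.IndexFlatIP(vectors.shape[1])   # inner product == cosine here
index.add(vectors)

query = encoder.encode(["How does QLoRA work?"], normalize_embeddings=True)
scores, ids = index.search(query, 2)
for rank, (i, s) in enumerate(zip(ids[0], scores[0]), 1):
    print(f"{rank}. ({s:.2f}) {docs[i]}")    # passages passed to the LLM
```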
Note: We're keeping the process inclusive — whether you're immediately available, serving notice, or have a 60-day notice period, you're welcome to apply.

Posted 2 hours ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Description

ABOUT CLOUDBEES
CloudBees provides the leading software delivery platform for enterprises, enabling them to continuously innovate, compete, and win in a world powered by the digital experience. Designed for the world's largest organizations with the most complex requirements, CloudBees enables software development organizations to deliver scalable, compliant, governed, and secure software from the code a developer writes to the people who use it. The platform connects with other best-of-breed tools, improves the developer experience, and enables organizations to bring digital innovation to life continuously, adapt quickly, and unlock business outcomes that create market leaders and disruptors.

CloudBees was founded in 2010 and is backed by Goldman Sachs, Morgan Stanley, Bridgepoint Credit, HSBC, Golub Capital, Delta-v Capital, Matrix Partners, and Lightspeed Venture Partners. Visit www.cloudbees.com and follow us on Twitter, LinkedIn, and Facebook.

WHAT YOU’LL DO!
These are some of the tasks that you’ll be engaged on:
- Design, develop, and maintain automated test scripts using Playwright with TypeScript/JavaScript, as well as Selenium with Java, to ensure comprehensive test coverage across applications.
- Enhance the existing Playwright framework by implementing modular test design and optimizing performance, while also utilizing Cucumber for Behavior-Driven Development (BDD) scenarios.
- Execute functional, regression, integration, performance, and security testing of web applications, APIs, and microservices.
- Collaborate in an Agile environment, participating in daily stand-ups, sprint planning, and retrospectives to ensure alignment on testing strategies and workflows.
- Troubleshoot and analyze test failures and defects using debugging tools and techniques, including logging and tracing within Playwright, Selenium, Postman, Grafana, etc.
- Document and report test results, defects, and issues using Jira and Confluence, ensuring clarity and traceability for all test activities.
- Implement page object models and reusable test components in both Playwright and Selenium to promote code reusability and maintainability.
- Integrate automated tests into CI/CD pipelines using Jenkins and GitHub Actions, ensuring seamless deployment and testing processes.
- Collaborate via Git for version control, managing branches and pull requests to maintain code quality and facilitate teamwork.
- Mentor and coach junior QA engineers on best practices for test automation, Playwright usage, and CI/CD workflows.
- Research and evaluate new tools and technologies to enhance testing processes and coverage.

WHAT DO YOU NEED TO SHINE IN THIS ROLE?
- Bachelor’s degree in Computer Science, Engineering, or a related field, or equivalent work experience.
- At least 5 years of experience in software testing, with at least 3 years in test automation.
- Ability to write functional tests, test plans, and test strategies.
- Ability to configure test environments and test data using automation tools.
- Experience creating an automated regression/CI test suite using Cucumber with Playwright (preferred) or Selenium, and REST APIs.
- Proficiency in one or more programming languages: Java, JavaScript, or TypeScript.
- Experience testing web applications, APIs, and microservices using various tools and frameworks such as Selenium, Cucumber, etc.
- Experience working with cloud platforms such as AWS, Azure, GCP, etc.
- Experience working with CI/CD tools such as Jenkins, GitLab, GitHub, etc.
- Experience writing queries and working with databases such as Postgres, Cassandra, etc.
- Experience working with tools such as Postman, JMeter, Grafana, etc.
- Experience working with Agile methodologies such as Scrum, Kanban, etc.
- Ability to work independently and as part of a team.
- Ability to learn new technologies and tools quickly and adapt to changing requirements.
- Highly analytical mindset, with a logical approach to finding solutions and performing root cause analysis.
- Able to prioritize between critical and non-critical path items.
- Excellent communication skills, with the ability to communicate test results to stakeholders in terms of the functional aspects of the system and their impact.

What You’ll Get
- Highly competitive compensation, benefits, and vacation package
- The ability to work for one of the fastest growing companies with some of the most talented people in the industry
- Team outings
- A fun, hardworking, and casual environment
- Endless growth opportunities

We have a culture of movers and shakers and are leading the way for everyone else with a vision to transform the industry. We are authentic in who we are. We believe in our abilities and strengths to change the world for the better. Being inclusive and working together is at the heart of everything we do. We are naturally curious. We ask the right questions, challenge what can be done differently and come up with intelligent solutions to the problems we find. If that’s you, get ready to bee impactful and join the hive.

Scam Notice
Please be aware that there are individuals and organizations that may attempt to scam job seekers by offering fraudulent employment opportunities in the name of CloudBees. These scams may involve fake job postings, unsolicited emails, or messages claiming to be from our recruiters or hiring managers. Please note that CloudBees will never ask for any personal account information, such as cell phone, credit card details or bank account numbers, during the recruitment process. Additionally, CloudBees will never send you a check for any equipment prior to employment. All communication from our recruiters and hiring managers will come from official company email addresses (@cloudbees.com) or from Paylocity and will never ask for any payment, fee to be paid or purchases to be made by the job seeker. If you are contacted by anyone claiming to represent CloudBees and you are unsure of their authenticity, please do not provide any personal/financial information and contact us immediately at tahelp@cloudbees.com. We take these matters very seriously and will work to ensure that any fraudulent activity is reported and dealt with appropriately. If you feel like you have been scammed in the US, please report it to the Federal Trade Commission at: https://reportfraud.ftc.gov/#/. In Europe, please contact the European Anti-Fraud Office at: https://anti-fraud.ec.europa.eu/olaf-and-you/report-fraud_en

Signs of a Recruitment Scam
- Ensure there are no other domains before or after @cloudbees.com.
For example: “name.dr.cloudbees.com”
- Check any documents for poor spelling and grammar; this is often a sign that fraudsters are at work.
- A generic email address such as @Yahoo or @Hotmail is given as a point of contact.
- You are asked for money, an “administration fee”, “security fee”, or an “accreditation fee”.
- You are asked for cell phone account information.
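The automation stack above centres on Playwright with TypeScript; Playwright also ships a Python binding, so here is a minimal sketch of a functional check in that binding (the URL and the assertion target are illustrative):

```python
# Minimal Playwright sketch (Python binding; the role itself uses TypeScript).
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    page = browser.new_page()
    page.goto("https://example.com")
    assert "Example" in page.title()          # simple functional check
    page.get_by_role("link").first.click()    # interact like a user would
    print("navigated to:", page.url)
    browser.close()
```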

Posted 3 hours ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


We are hiring for one of the IT Big 4 consulting firms.

Designation: Associate/Associate Consultant
Location: Chennai/Gurgaon/Pune

Skills Required:
- AWS (big data services): S3, Glue, Athena, EMR
- Programming: Python, Spark, SQL, MuleSoft, Talend, dbt
- Data warehouse: ETL, Redshift/Snowflake

Key Responsibilities:
- Work with business stakeholders to understand their business needs.
- Create data pipelines that extract, transform, and load (ETL) data from various sources into a usable format in a data warehouse.
- Clean, filter, and validate data to ensure it meets quality and format standards.
- Develop data model objects (tables, views) to transform the data into a unified format for downstream consumption.
- Monitor, control, configure, and maintain processes in the cloud data platform.
- Optimize data pipelines and data storage for performance and efficiency.
- Participate in code reviews and provide meaningful feedback to other team members.
- Provide technical support and troubleshoot issues.

Qualifications:
- Bachelor’s degree in computer science, information technology, or a related field, or equivalent work experience.
- Experience working in the AWS cloud platform.
- Data engineer with expertise in developing big data and data warehouse platforms.
- Experience working with structured and semi-structured data.
- Expertise in developing big data solutions and ETL/ELT pipelines for data ingestion, data transformation, and optimization techniques.
- Experience working directly with technical and business teams.
- Able to create technical documentation.
- Excellent problem-solving and analytical skills.
- Strong communication and collaboration abilities.

Skillset (good to have):
- Experience in data modeling.
- AWS certification for Data Engineer skills.
- Experience with ITSM processes/tools such as ServiceNow and Jira.
- Understanding of Spark, Hive, Kafka, Kinesis, Spark Streaming, and Airflow.
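For the S3/Glue pipeline work listed above, a skeleton AWS Glue PySpark job (the catalog database, table, and S3 path are illustrative placeholders):

```python
# Skeleton of an AWS Glue PySpark job: read from the Glue Data Catalog,
# retype columns, and write Parquet to S3.
import sys
from awsglue.transforms import ApplyMapping
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue = GlueContext(SparkContext.getOrCreate())
job = Job(glue)
job.init(args["JOB_NAME"], args)

# Extract: a table previously registered by a Glue crawler
src = glue.create_dynamic_frame.from_catalog(database="raw", table_name="orders")

# Transform: keep and retype only the columns downstream consumers need
mapped = ApplyMapping.apply(
    frame=src,
    mappings=[("order_id", "string", "order_id", "string"),
              ("amount", "double", "amount", "double")],
)

# Load: write Parquet to the curated S3 zone
glue.write_dynamic_frame.from_options(
    frame=mapped,
    connection_type="s3",
    connection_options={"path": "s3://demo-curated/orders/"},
    format="parquet",
)
job.commit()
```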

Posted 4 hours ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Role: Senior Databricks Engineer / Databricks Technical Lead / Data Architect
Location: Bangalore, Chennai, Delhi, Pune, Kolkata

Primary Roles And Responsibilities
- Develop Modern Data Warehouse solutions using Databricks and the AWS/Azure stack
- Provide forward-thinking solutions in the data engineering and analytics space
- Collaborate with DW/BI leads to understand new ETL pipeline development requirements
- Triage issues to find gaps in existing pipelines and fix them
- Work with the business to understand reporting-layer needs and develop data models to fulfill them
- Help junior team members resolve issues and technical challenges
- Drive technical discussions with client architects and team members
- Orchestrate data pipelines in a scheduler via Airflow

Skills And Qualifications
- Bachelor's and/or master’s degree in computer science or equivalent experience
- 6+ years of total IT experience and 3+ years' experience in data warehouse/ETL projects
- Deep understanding of Star and Snowflake dimensional modelling
- Strong knowledge of data management principles
- Good understanding of the Databricks Data & AI platform and Databricks Delta Lake architecture
- Hands-on experience in SQL, Python, and Spark (PySpark)
- Experience in the AWS/Azure stack (mandatory)
- Desirable: ETL with batch and streaming (Kinesis)
- Experience building ETL / data warehouse transformation processes
- Experience with Apache Kafka for streaming/event-based data
- Experience with other open-source big data products, including Hadoop (incl. Hive, Pig, Impala)
- Experience with open-source non-relational/NoSQL data repositories (incl. MongoDB, Cassandra, Neo4j)
- Experience working with structured and unstructured data, including imaging and geospatial data
- Experience working in a DevOps environment with tools such as Terraform, CircleCI, and Git
- Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning, and troubleshooting
- Databricks Certified Data Engineer Associate/Professional certification (desirable)
- Comfortable working in a dynamic, fast-paced, innovative environment with several ongoing concurrent projects
- Experience working in Agile methodology
- Strong verbal and written communication skills
- Strong analytical and problem-solving skills with high attention to detail

Mandatory Skills: Python/PySpark/Spark with Azure/AWS Databricks

Skills: Neo4j, Pig, MongoDB, PL/SQL, architect, Terraform, Hadoop, PySpark, Impala, Apache Kafka, ADFS, ETL, data warehouse, Spark, Azure, Databricks, RDBMS, Cassandra, AWS, Unix shell scripting, CircleCI, Python, Azure Synapse, Hive, Git, Kinesis, SQL
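A minimal sketch of writing and reading a Delta Lake table with PySpark, as in the Databricks work described above (the path and data are illustrative placeholders):

```python
# Sketch: write and read a Delta table with PySpark. On Databricks the
# Delta libraries are preinstalled, so no extra configuration is shown.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-demo").getOrCreate()

df = spark.createDataFrame(
    [(1, "north", 120.0), (2, "south", 75.5)],
    ["order_id", "region", "amount"],
)

# Write as a Delta table; Delta adds ACID transactions over Parquet files
df.write.format("delta").mode("overwrite").save("/tmp/delta/orders")

# Read it back and query
orders = spark.read.format("delta").load("/tmp/delta/orders")
orders.groupBy("region").sum("amount").show()
```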

Posted 5 hours ago

Apply

3.0 years

0 Lacs

Greater Kolkata Area

On-site


Role: Senior Databricks Engineer / Databricks Technical Lead / Data Architect
Location: Bangalore, Chennai, Delhi, Pune, Kolkata

Primary Roles And Responsibilities
- Develop Modern Data Warehouse solutions using Databricks and the AWS/Azure stack
- Provide forward-thinking solutions in the data engineering and analytics space
- Collaborate with DW/BI leads to understand new ETL pipeline development requirements
- Triage issues to find gaps in existing pipelines and fix them
- Work with the business to understand reporting-layer needs and develop data models to fulfill them
- Help junior team members resolve issues and technical challenges
- Drive technical discussions with client architects and team members
- Orchestrate data pipelines in a scheduler via Airflow

Skills And Qualifications
- Bachelor's and/or master’s degree in computer science or equivalent experience
- 6+ years of total IT experience and 3+ years' experience in data warehouse/ETL projects
- Deep understanding of Star and Snowflake dimensional modelling
- Strong knowledge of data management principles
- Good understanding of the Databricks Data & AI platform and Databricks Delta Lake architecture
- Hands-on experience in SQL, Python, and Spark (PySpark)
- Experience in the AWS/Azure stack (mandatory)
- Desirable: ETL with batch and streaming (Kinesis)
- Experience building ETL / data warehouse transformation processes
- Experience with Apache Kafka for streaming/event-based data
- Experience with other open-source big data products, including Hadoop (incl. Hive, Pig, Impala)
- Experience with open-source non-relational/NoSQL data repositories (incl. MongoDB, Cassandra, Neo4j)
- Experience working with structured and unstructured data, including imaging and geospatial data
- Experience working in a DevOps environment with tools such as Terraform, CircleCI, and Git
- Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning, and troubleshooting
- Databricks Certified Data Engineer Associate/Professional certification (desirable)
- Comfortable working in a dynamic, fast-paced, innovative environment with several ongoing concurrent projects
- Experience working in Agile methodology
- Strong verbal and written communication skills
- Strong analytical and problem-solving skills with high attention to detail

Mandatory Skills: Python/PySpark/Spark with Azure/AWS Databricks

Skills: Neo4j, Pig, MongoDB, PL/SQL, architect, Terraform, Hadoop, PySpark, Impala, Apache Kafka, ADFS, ETL, data warehouse, Spark, Azure, Databricks, RDBMS, Cassandra, AWS, Unix shell scripting, CircleCI, Python, Azure Synapse, Hive, Git, Kinesis, SQL

Posted 5 hours ago

Apply

8.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Background
Praan (Praan, Inc.) is an impact-focused deep-tech startup democratizing clean air using breakthrough filterless technology. The company is backed by top-tier VCs and CXOs globally and currently operates between the United States and India. Our team puts extreme attention to detail and loves building technology that's aspirational. Praan's team and culture are positioned to empower people to solve large global problems at an accelerated pace.
Why
Everyone worries about the climate-change doomsday expected to arrive in the 2050s. However, there is one doomsday that is already a reality for millions of people around the world today. Air pollution takes more than 7 million lives globally every single year, and over 5% of premature child deaths in developing countries occur due to air pollution. Everyone has relied on governments or experts to solve the problem, but most solutions to date have been either too expensive or too ineffective. Praan is an attempt at making the future cleaner, healthier, and safer for the generations to come.
Job Description
Supervise, monitor, and coordinate all production activities across the HIVE and MKII assembly lines
Ensure adherence to daily, weekly, and monthly production targets while maintaining product quality and minimizing downtime
Implement and sustain Kaizen, 5S, and other continuous improvement initiatives to enhance line efficiency and reduce waste
Oversee daily start-of-day and end-of-day inventory reporting
Ensure line balancing for optimal resource utilization and minimal bottlenecks
Monitor and manage manpower deployment, shift scheduling, absentee management and skill mapping to maintain productivity
Drive quality standards by coordinating closely with the Manufacturing Lead
Track and analyze key production KPIs (OEE, yield, downtime) and initiate corrective actions (see the worked example below)
Ensure adherence to SOPs, safety protocols, and compliance standards
Support new product introductions (NPIs) and design changes in coordination with R&D/engineering teams
Train and mentor line operators and line leaders, ensuring training, skill development, and adherence to performance standards
Monitor and report on key production metrics, including output, downtime, efficiency, scrap rates, and productivity, ensuring targets are met consistently
Maintain documentation and reports related to production planning, line output, incidents, and improvements
Skill Requirements
Diploma/Bachelor's degree in Mechanical, Production, Electronics, Industrial Engineering, or a related field
4-8 years of hands-on production supervision experience in a high-volume manufacturing environment managing the production of multiple products
Proven expertise in Kaizen, Lean Manufacturing, Line Balancing, and Shop Floor Management
Proven ability to manage large teams, allocate resources effectively, and meet production targets in a fast-paced, dynamic environment
Experience with production planning, manpower management, and problem-solving techniques (such as 5 Whys and Fishbone analysis)
Strong understanding of manufacturing KPIs and process documentation
Excellent leadership, communication, and conflict-resolution skills
Hands-on attitude with a willingness to work on-ground
Experience in automotive, consumer electronics, or similar high-volume industries
Praan is an equal opportunity employer and does not discriminate based on race, religion, caste, gender, disability or any other criteria. We just care about working with great human beings!
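
For reference, the OEE figure this role tracks is conventionally the product of availability, performance, and quality. A quick worked example in Python with illustrative shift numbers (not Praan data):

```python
# OEE = Availability x Performance x Quality (standard definition).
planned_time = 480      # minutes in the shift
downtime = 45           # minutes lost to stoppages
ideal_cycle_time = 0.5  # minutes per unit at rated speed
units_produced = 800
good_units = 776

availability = (planned_time - downtime) / planned_time                        # ~0.906
performance = (ideal_cycle_time * units_produced) / (planned_time - downtime)  # ~0.920
quality = good_units / units_produced                                          # 0.970

oee = availability * performance * quality
print(f"OEE = {oee:.1%}")  # ~80.8%
```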

Posted 5 hours ago

Apply

3.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Role: Senior Databricks Engineer / Databricks Technical Lead / Data Architect
Location: Bangalore, Chennai, Delhi, Pune, Kolkata
Primary Roles And Responsibilities
Develop Modern Data Warehouse solutions using Databricks and the AWS/Azure stack
Provide forward-thinking solutions in the data engineering and analytics space
Collaborate with DW/BI leads to understand new ETL pipeline development requirements
Triage issues to find gaps in existing pipelines and fix them
Work with the business to understand reporting-layer needs and develop data models to fulfill them
Help junior team members resolve issues and technical challenges
Drive technical discussions with the client architect and team members
Orchestrate data pipelines via the Airflow scheduler
Skills And Qualifications
Bachelor's and/or master's degree in computer science, or equivalent experience
6+ years of total IT experience, including 3+ years on data warehouse/ETL projects
Deep understanding of star and snowflake dimensional modelling
Strong knowledge of data management principles
Good understanding of the Databricks Data & AI platform and the Databricks Delta Lake architecture
Hands-on experience in SQL, Python and Spark (PySpark)
Experience with the AWS/Azure stack is a must
ETL with batch and streaming (Kinesis) is desirable
Experience building ETL / data warehouse transformation processes
Experience with Apache Kafka for streaming / event-based data (a minimal Structured Streaming sketch follows this listing)
Experience with other open-source big data products, including Hadoop (Hive, Pig, Impala)
Experience with open-source non-relational / NoSQL data repositories (MongoDB, Cassandra, Neo4j)
Experience working with structured and unstructured data, including imaging and geospatial data
Experience working in a DevOps environment with tools such as Terraform, CircleCI and Git
Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning and troubleshooting
Databricks Certified Data Engineer Associate/Professional certification (desirable)
Comfortable working in a dynamic, fast-paced, innovative environment with several concurrent projects
Experience working in Agile methodology
Strong verbal and written communication skills
Strong analytical and problem-solving skills with high attention to detail
Mandatory Skills: Python / PySpark / Spark with Azure/AWS Databricks
Skills: Neo4j, Pig, MongoDB, PL/SQL, Architect, Terraform, Hadoop, PySpark, Impala, Apache Kafka, ADFS, ETL, Data Warehouse, Spark, Azure, Databricks, RDBMS, Cassandra, AWS, Unix Shell Scripting, CircleCI, Python, Azure Synapse, Hive, Git, Kinesis, SQL
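
The Kafka/streaming experience this listing asks for maps naturally onto Spark Structured Streaming. A minimal sketch, assuming the spark-sql-kafka connector is on the classpath (it is on Databricks) and using hypothetical broker, topic, schema, and path names:

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("orders_stream").getOrCreate()

# Expected shape of the JSON payload (illustrative).
schema = StructType([
    StructField("order_id", StringType()),
    StructField("amount", DoubleType()),
])

# Read events from Kafka; bootstrap servers and topic are illustrative.
events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "orders")
          .load())

# Kafka delivers bytes; parse the JSON value into typed columns.
parsed = (events
          .select(F.from_json(F.col("value").cast("string"), schema).alias("o"))
          .select("o.*"))

# Continuously append into a Delta table, with checkpointing for recovery.
query = (parsed.writeStream
         .format("delta")
         .option("checkpointLocation", "s3://lake-bucket/_chk/orders/")
         .start("s3://lake-bucket/bronze/orders/"))

query.awaitTermination()
```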

Posted 6 hours ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Role: Senior Databricks Engineer / Databricks Technical Lead / Data Architect
Location: Bangalore, Chennai, Delhi, Pune, Kolkata
Primary Roles And Responsibilities
Develop Modern Data Warehouse solutions using Databricks and the AWS/Azure stack
Provide forward-thinking solutions in the data engineering and analytics space
Collaborate with DW/BI leads to understand new ETL pipeline development requirements
Triage issues to find gaps in existing pipelines and fix them
Work with the business to understand reporting-layer needs and develop data models to fulfill them
Help junior team members resolve issues and technical challenges
Drive technical discussions with the client architect and team members
Orchestrate data pipelines via the Airflow scheduler
Skills And Qualifications
Bachelor's and/or master's degree in computer science, or equivalent experience
6+ years of total IT experience, including 3+ years on data warehouse/ETL projects
Deep understanding of star and snowflake dimensional modelling
Strong knowledge of data management principles
Good understanding of the Databricks Data & AI platform and the Databricks Delta Lake architecture (a minimal Delta MERGE sketch follows this listing)
Hands-on experience in SQL, Python and Spark (PySpark)
Experience with the AWS/Azure stack is a must
ETL with batch and streaming (Kinesis) is desirable
Experience building ETL / data warehouse transformation processes
Experience with Apache Kafka for streaming / event-based data
Experience with other open-source big data products, including Hadoop (Hive, Pig, Impala)
Experience with open-source non-relational / NoSQL data repositories (MongoDB, Cassandra, Neo4j)
Experience working with structured and unstructured data, including imaging and geospatial data
Experience working in a DevOps environment with tools such as Terraform, CircleCI and Git
Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning and troubleshooting
Databricks Certified Data Engineer Associate/Professional certification (desirable)
Comfortable working in a dynamic, fast-paced, innovative environment with several concurrent projects
Experience working in Agile methodology
Strong verbal and written communication skills
Strong analytical and problem-solving skills with high attention to detail
Mandatory Skills: Python / PySpark / Spark with Azure/AWS Databricks
Skills: Neo4j, Pig, MongoDB, PL/SQL, Architect, Terraform, Hadoop, PySpark, Impala, Apache Kafka, ADFS, ETL, Data Warehouse, Spark, Azure, Databricks, RDBMS, Cassandra, AWS, Unix Shell Scripting, CircleCI, Python, Azure Synapse, Hive, Git, Kinesis, SQL
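
Delta Lake's MERGE is the usual way warehouse transformations like these handle late-arriving updates. A minimal upsert sketch using the delta-spark API, with hypothetical table paths and key column:

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("orders_upsert").getOrCreate()

# Hypothetical staging table holding changed rows from the latest load.
updates = spark.read.format("delta").load("s3://lake-bucket/staging/orders/")

target = DeltaTable.forPath(spark, "s3://lake-bucket/silver/orders/")

# Upsert: update rows whose order_id already exists, insert the rest.
(target.alias("t")
 .merge(updates.alias("u"), "t.order_id = u.order_id")
 .whenMatchedUpdateAll()
 .whenNotMatchedInsertAll()
 .execute())
```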

Posted 6 hours ago

Apply

4.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

At o9 Solutions, our mission is clear: be the Most Valuable Platform (MVP) for enterprises. With our AI-driven platform, the o9 Digital Brain, we integrate global enterprises' siloed planning capabilities, helping them capture millions and, in some cases, billions of dollars in value leakage. But our impact doesn't stop there. Businesses that plan better and faster also reduce waste, which drives better outcomes for the planet, too. We're on the lookout for the brightest, most committed individuals to join us on our mission. Along the journey, we'll provide you with a nurturing environment where you can be part of something truly extraordinary and make a real difference for companies and the planet.
What you'll do for us:
Apply a variety of machine learning techniques (clustering, regression, ensemble learning, neural nets, time series, optimization, etc.) with an understanding of their real-world advantages and drawbacks
Develop and/or optimize models for demand sensing/forecasting, optimization (heuristic, LP, GA, etc.), anomaly detection, simulation and stochastic models, market intelligence, etc.
Use the latest advancements in AI/ML to solve business problems
Analyze problems by synthesizing complex information, evaluating alternate methods, and articulating the result with the relevant assumptions/reasons
Apply common business metrics (Forecast Accuracy, Bias, MAPE) and generate new ones as needed
Develop or optimize modules that call web services for real-time integration with external systems
Work collaboratively with clients, project management, solution architects, consultants and data engineers to ensure successful delivery of o9 projects
What you'll have:
Experience: 4+ years of experience in time series forecasting at scale using heuristic-based hierarchical best-fit models, with algorithms like exponential smoothing, ARIMA and Prophet, and custom parameter tuning (see the sketch after this listing)
Experience in applied analytical methods in the field of supply chain and planning, such as demand planning, supply planning, market intelligence, and optimal assortments/pricing/inventory
A statistical background
Education: Bachelor's degree in Computer Science, Mathematics, Statistics, Economics, Engineering or a related field
Languages: Python and/or R for data science
Skills: deep knowledge of statistical and machine learning algorithms; building scalable ML frameworks; identifying and collecting relevant input data; feature engineering, tuning, and testing
Characteristics: independent thinkers with strong presentation and communication skills
We really value team spirit: transparency and frequent communication are key. At o9, this is not limited by hierarchy, distance, or function.
Nice to have:
Experience with SQL, databases and ETL tools or similar is optional but preferred
Exposure to distributed data/computing tools: Map/Reduce, Hadoop, Hive, Spark, Gurobi, or related big data technologies
Experience with deep learning frameworks such as Keras, TensorFlow or PyTorch is preferable
Experience in implementing planning applications is a plus
Understanding of supply chain concepts is preferable
Master's degree in Computer Science, Applied Mathematics, Statistics, Engineering, Business Analytics, Operations, or a related field
What we'll do for you:
Competitive salary, with stock options for eligible candidates
Flat organization with a very strong entrepreneurial culture (and no corporate politics)
Great people and unlimited fun at work
The possibility to make a difference in a scale-up environment
The opportunity to travel onsite in specific phases depending on project requirements
Support network: work with a team you can learn from every day
Diversity: we pride ourselves on our international working environment
Work-Life Balance: https://youtu.be/IHSZeUPATBA?feature=shared
Feel part of a team: https://youtu.be/QbjtgaCyhes?feature=shared
How the process works:
Apply by clicking the button below.
You'll be contacted by our recruiter, who'll fill you in on all things o9, give you some background about the role, and get to know you. They'll contact you via video call or phone call, whichever you prefer.
During the interview phase, you will meet with technical panels for 60 minutes. We will have two rounds of technical discussion followed by a hiring manager discussion, and the recruiter will contact you after each interview to let you know if we'd like to progress your application.
Our recruiter will let you know if you're the successful candidate. Good luck!
More about us:
With the latest increase in our valuation from $2.7B to $3.7B despite challenging global macroeconomic conditions, o9 Solutions is one of the fastest-growing technology companies in the world today. Our mission is to digitally transform planning and decision-making for the enterprise and the planet. Our culture is high-energy and drives us to aim 10x in everything we do. Our platform, the o9 Digital Brain, is the premier AI-powered, cloud-native platform driving the digital transformations of major global enterprises including Google, Walmart, ABInBev, Starbucks and many others. Our headquarters are in Dallas, with offices in Amsterdam, Paris, London, Barcelona, Madrid, Sao Paulo, Bengaluru, Tokyo, Seoul, Milan, Stockholm, Sydney, Shanghai, Singapore and Munich.
o9 is an equal opportunity employer and seeks applicants of diverse backgrounds and hires without regard to race, colour, gender, religion, national origin, citizenship, age, sexual orientation or any other characteristic protected by law.
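
The forecasting stack this role describes (exponential smoothing, ARIMA, Prophet, scored with accuracy/bias/MAPE) can be sketched with statsmodels. A minimal, illustrative example on synthetic monthly demand, not o9's actual pipeline:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Synthetic monthly demand with trend and yearly seasonality.
idx = pd.date_range("2020-01-01", periods=48, freq="MS")
rng = np.random.default_rng(0)
y = pd.Series(100 + np.arange(48) * 2
              + 10 * np.sin(2 * np.pi * np.arange(48) / 12)
              + rng.normal(0, 3, 48), index=idx)

train, test = y[:-12], y[-12:]

# Holt-Winters: additive trend and seasonality, 12-month season.
model = ExponentialSmoothing(train, trend="add", seasonal="add",
                             seasonal_periods=12).fit()
forecast = model.forecast(12)

# The business metrics the listing mentions.
mape = (abs(test - forecast) / test).mean() * 100
bias = (forecast - test).mean()
print(f"MAPE = {mape:.1f}%  bias = {bias:+.1f}")
```

In a hierarchical best-fit setup, a loop like this would run per series (SKU, location, and so on), compare several candidate models on a holdout window, and keep the winner.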

Posted 8 hours ago

Apply

0 years

0 Lacs

Kalaburagi, Karnataka, India

On-site

Responsibilities
Ability to write clean, maintainable, and robust code in Python
Understanding and expertise in software engineering concepts and best practices
Knowledge of testing frameworks and libraries
Experience with analytics (descriptive, predictive, EDA), feature engineering, algorithms, anomaly detection, data quality assessment and Python visualization libraries (e.g., matplotlib or seaborn)
Comfortable with notebook and source code development: Jupyter, PyCharm/VS Code
Hands-on experience with technologies like Python, Spark/PySpark, Hadoop/MapReduce/Hive, pandas, etc.
Familiarity with query languages and database technologies, CI/CD, and testing and validation of data and software
Tech stack and activities you would use and perform on a daily basis (see the sketch below for a flavour of the EDA work):
Python
Spark (PySpark)
Jupyter
SQL and NoSQL DBMS
Git (for source code versioning and CI/CD)
Exploratory Data Analysis (EDA)
Imputation techniques
Data linking / cleansing
Feature engineering
Apache Airflow / Jenkins for scheduling and automation; GitHub and GitHub Actions
Collaborative: able to build strong relations that enable robust debate, and resolve periodic disagreements regarding priorities
Excellent interpersonal and communication skills
Ability to communicate effectively with technical and non-technical audiences
Ability to work under pressure with a solid sense for setting priorities
Ability to lead technical work with a strong sense of ownership
Strong command of the English language (both verbal and written)
Practical and action-oriented
Compelling communicator
Excellent stakeholder management
Foster and promote entrepreneurial spirit and curiosity amongst team members
Team player
Quick learner
(ref:hirist.tech)
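
The EDA, data-quality-assessment, and imputation items above are routine pandas work. A minimal sketch on a hypothetical dataframe (column names and rules are illustrative):

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical raw extract with gaps and duplicates.
df = pd.DataFrame({
    "customer_id": [1, 1, 2, 3, 4, 4],
    "age": [34, 34, np.nan, 51, 29, 29],
    "spend": [120.0, 120.0, 85.5, np.nan, 42.0, 42.0],
})

# Quick data-quality assessment.
print(df.isna().mean())        # null rate per column
print(df.duplicated().sum())   # count of exact duplicate rows

# Cleansing: drop duplicates, impute numeric gaps with the median.
clean = df.drop_duplicates()
clean = clean.fillna(clean[["age", "spend"]].median())

# Simple visual EDA.
clean["spend"].hist(bins=10)
plt.title("Spend distribution")
plt.savefig("spend_hist.png")
```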

Posted 11 hours ago

Apply

2.0 - 5.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

At American Express, our culture is built on a 175-year history of innovation, shared values and Leadership Behaviors, and an unwavering commitment to back our customers, communities, and colleagues. As part of Team Amex, you'll experience this powerful backing with comprehensive support for your holistic well-being and many opportunities to learn new skills, develop as a leader, and grow your career. Here, your voice and ideas matter, your work makes an impact, and together, you will help us define the future of American Express.
How will you make an impact in this role?
Join Team Amex and let's lead the way together.
About the Team
American Express is on a journey to provide the world's best customer experience every day. The Commercial Data Office (CoDO) team, within Global Commercial Services (GCS), is focused on powering the best customer experience and business growth through streamlined data. With continuous changes in the regulatory environment and ongoing innovation through data, we play a key role in strengthening GCS critical enablers and supporting new growth opportunities. In partnership with the Enterprise Chief Data Office and across GCS, CoDO is tasked with building new, innovative data solutions for our customers while adhering to regulations and data management standard methodologies. The team's scope comprises three pillars:
Drive Strategic Growth & Revenue: defining the vision and roadmap to transform data into a key asset that powers business growth
Modernize Data Management: ongoing data management, data discovery and collaboration across Global Commercial Services
Ensure Health of the Commercial Business: striving for 100% reliability of current data platforms and capabilities while developing more agility and scalability for the future
Purpose of the Role
This role focuses on solutioning for GCS data governance and data quality (DQ), and on managing DQ issues. It involves partnering with stakeholders and GCS teams to gather requirements; performing root cause analysis via data mining and analysis; identifying process gaps or monitoring existing solutions; proposing and presenting possible solutions to leaders and stakeholders; and, where required, working with technology teams to develop scalable data quality solutions.
Responsibilities
Accountable for the necessary remediation of data quality issues originating from business processes, and for notifying key stakeholders of remediation activities
Ensure data quality issues are captured, managed, and have a remediation plan that is thorough, complete, and timely, so that a comprehensive fix is delivered
Write and execute SQL and Python scripts to analyze/profile data, solution for data quality issues in a big data environment, and propose robust data quality controls (a minimal sketch follows this listing)
Build knowledge of the business data and systems that your domain/business unit produces and consumes
Participate in project meetings and communicate effectively with leaders, peers, architects, system analysts, project managers and others, reporting project status as required
Utilize data quality and data profiling tools to solution for the DQ framework and issues
Understand, gather, and translate project/user requirements into well-defined features/user stories with testable acceptance criteria, collaborating with multiple stakeholders on requirements and prioritization
Work closely with the technology team to develop, test and deliver the defined deliverables, and perform user acceptance testing (UAT) before accepting stories
Raise any bugs/gaps identified and get them fixed with the technology team; provide proactive solutioning for any sprint blockers
Understand data management concepts and practices such as data security, data quality, and metadata
Responsible for building scalable data quality frameworks for enterprise projects
Minimum Qualifications
Business Outcomes:
Deliver high-quality, accurate data quality solutioning within deadlines
Understand gaps and remediate them as per business requirements
Accountable for the Agile delivery framework
Track progress and share project updates to ensure stakeholder satisfaction
Academic Background & Past Experience
MBA, or bachelor's/advanced degree in science, computer science, information technology, or information management
2-5 years of relevant experience as a product/business analyst
Functional Skills/Capabilities
Strong stakeholder management and excellent written and verbal communication
Experience in requirements gathering, product management, backlog creation, project tracking and comprehensive documentation
Experience with the Agile Scrum process and principles, actively leading Agile ceremonies (scrum, grooming, retrospectives)
Ability to multi-task with high precision and quality of delivery
Ability to independently drive, track and ensure accurate delivery
Technical Skills/Capabilities
Strong SQL/Python and MS Excel for data querying and analysis, including SQL/Hive in big data environments
JIRA for user stories and Agile Scrum management
Confluence, MS Word and PowerPoint for documentation
ML experience a plus, but not compulsory
Understanding of data governance, data quality, data lineage, and data transformation is a plus
Preferred Qualifications
A good balance of technical knowledge and business acumen
We back you with benefits that support your holistic well-being so you can be and deliver your best. This means caring for you and your loved ones' physical, financial, and mental health, as well as providing the flexibility you need to thrive personally and professionally:
Competitive base salaries
Bonus incentives
Support for financial well-being and retirement
Comprehensive medical, dental, vision, life insurance, and disability benefits (depending on location)
Flexible working model with hybrid, onsite or virtual arrangements depending on role and business need
Generous paid parental leave policies (depending on your location)
Free access to global on-site wellness centers staffed with nurses and doctors (depending on location)
Free and confidential counseling support through our Healthy Minds program
Career development and training opportunities
American Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, disability status, age, or any other status protected by law. Offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.
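
The DQ solutioning described here, profiling data and then codifying a control, often reduces to a reusable check that returns pass/fail plus evidence. A minimal, illustrative pandas sketch with hypothetical rules and column names, not an Amex framework:

```python
import pandas as pd

def run_dq_checks(df: pd.DataFrame) -> pd.DataFrame:
    """Profile a dataframe against simple declarative rules and
    return one row per check, suitable for DQ issue tracking."""
    checks = [
        ("account_id not null", df["account_id"].notna().all()),
        ("account_id unique", df["account_id"].is_unique),
        ("limit non-negative", (df["credit_limit"] >= 0).all()),
        ("currency in ref set", df["currency"].isin({"USD", "EUR", "INR"}).all()),
    ]
    return pd.DataFrame(checks, columns=["check", "passed"])

# Hypothetical extract from an upstream business process.
accounts = pd.DataFrame({
    "account_id": [101, 102, 103],
    "credit_limit": [5000, -10, 2500],
    "currency": ["USD", "INR", "XYZ"],
})

report = run_dq_checks(accounts)
print(report)  # failed checks become candidate DQ issues to remediate
```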

Posted 11 hours ago

Apply

0.0 - 2.0 years

0 Lacs

Pune, Maharashtra, India

On-site

The Applications Development Programmer Analyst is an intermediate-level position responsible for participating in the establishment and implementation of new or revised application systems and programs, in coordination with the Technology team. The overall objective of this role is to contribute to applications systems analysis and programming activities.
Responsibilities:
Utilize knowledge of applications development procedures and concepts, and basic knowledge of other technical areas, to identify and define necessary system enhancements
Identify and analyze issues, make recommendations, and implement solutions
Utilize knowledge of business processes, system processes, and industry standards to solve complex issues
Analyze information and make evaluative judgements to recommend solutions and improvements
Conduct testing and debugging, utilize script tools, and write basic code to design specifications
Assess the applicability of similar experiences and evaluate options under circumstances not covered by procedures
Develop working knowledge of Citi's information systems, procedures, standards, client-server application development, network operations, database administration, systems administration, data center operations, and PC-based applications
Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency.
Additional Job Description
We are looking for a Big Data Engineer to work on collecting, storing, processing, and analyzing huge sets of data. The primary focus will be on choosing optimal solutions for these purposes, then maintaining, implementing, and monitoring them. You will also be responsible for integrating them with the architecture used across the company.
Responsibilities
Select and integrate any Big Data tools and frameworks required to provide the requested capabilities
Implement data wrangling, scraping and cleaning using Java or Python (a minimal sketch follows this listing)
Strong experience with data structures
Extensive work on API integration
Monitor performance and advise on any necessary infrastructure changes
Define data retention policies
Skills And Qualifications
Proficient understanding of distributed computing principles
Proficient in Java or Python, with some machine learning exposure
Proficiency with Hadoop v2, MapReduce, HDFS, PySpark and Spark
Experience building stream-processing systems using solutions such as Storm or Spark Streaming
Good knowledge of Big Data querying tools, such as Pig, Hive, and Impala
Experience with Spark
Experience with integration of data from multiple data sources
Experience with NoSQL databases, such as HBase, Cassandra and MongoDB
Knowledge of various ETL techniques and frameworks, such as Flume
Experience with messaging systems such as Kafka or RabbitMQ
Experience with Big Data ML toolkits, such as Mahout, SparkML, or H2O
Good understanding of Lambda Architecture, along with its advantages and drawbacks
Experience with Cloudera/MapR/Hortonworks
Qualifications:
0-2 years of relevant experience
Experience in programming/debugging for business applications
Working knowledge of industry practices and standards
Comprehensive knowledge of the specific business area for application development
Working knowledge of programming languages
Consistently demonstrates clear and concise written and verbal communication
Education:
Bachelor's degree/University degree or equivalent experience
This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required.
------------------------------------------------------
Job Family Group: Technology
------------------------------------------------------
Job Family: Applications Development
------------------------------------------------------
Time Type: Full time
------------------------------------------------------
Most Relevant Skills
Please see the requirements listed above.
------------------------------------------------------
Other Relevant Skills
For complementary skills, please see above and/or contact the recruiter.
------------------------------------------------------
Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law.
If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi.
View Citi's EEO Policy Statement and the Know Your Rights poster.
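
The "data wrangling, scraping, cleaning" plus "API integration" responsibilities often reduce to pulling JSON from a service and normalizing it into a table. A minimal sketch with a hypothetical endpoint and column names:

```python
import requests
import pandas as pd

# Hypothetical REST endpoint returning a JSON array of records.
resp = requests.get("https://api.example.com/v1/trades", timeout=30)
resp.raise_for_status()

# Flatten the (possibly nested) JSON into a tabular frame.
df = pd.json_normalize(resp.json())

# Typical wrangling: type coercion, validation, deduplication.
df["trade_ts"] = pd.to_datetime(df["trade_ts"], errors="coerce")
df["notional"] = pd.to_numeric(df["notional"], errors="coerce")
df = df.dropna(subset=["trade_ts", "notional"]).drop_duplicates("trade_id")

print(df.dtypes)
```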

Posted 12 hours ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site

Position Overview
We are seeking a highly skilled and motivated Data Engineer to join our dynamic team. The ideal candidate will have extensive experience with AWS Glue, Apache Airflow, Kafka, SQL, Python, and DataOps tools and technologies; knowledge of SAP HANA and Snowflake is a plus. This role is critical for designing, developing, and maintaining our clients' data pipeline architecture, ensuring the efficient and reliable flow of data across the organization.
Key Responsibilities
Design, Develop, and Maintain Data Pipelines:
Develop robust and scalable data pipelines using AWS Glue, Apache Airflow, and other relevant technologies (a minimal Glue job skeleton follows this listing)
Integrate various data sources, including SAP HANA, Kafka, and SQL databases, to ensure seamless data flow and processing
Optimize data pipelines for performance and reliability
Data Management and Transformation:
Design and implement data transformation processes to clean, enrich, and structure data for analytical purposes
Utilize SQL and Python for data extraction, transformation, and loading (ETL) tasks
Ensure data quality and integrity through rigorous testing and validation processes
Collaboration and Communication:
Work closely with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions that meet their needs
Collaborate with cross-functional teams to implement DataOps practices and improve data lifecycle management
Monitoring and Optimization:
Monitor data pipeline performance and implement improvements to enhance efficiency and reduce latency
Troubleshoot and resolve data-related issues, ensuring minimal disruption to data workflows
Implement and manage monitoring and alerting systems to proactively identify and address potential issues
Documentation and Best Practices:
Maintain comprehensive documentation of data pipelines, transformations, and processes
Adhere to best practices in data engineering, including code versioning, testing, and deployment procedures
Stay up to date with the latest industry trends and technologies in data engineering and DataOps
Required Skills and Qualifications
Technical Expertise:
Extensive experience with AWS Glue for data integration and transformation
Proficient in Apache Airflow for workflow orchestration
Strong knowledge of Kafka for real-time data streaming and processing
Advanced SQL skills for querying and managing relational databases
Proficiency in Python for scripting and automation tasks
Experience with SAP HANA for data storage and management
Familiarity with DataOps tools and methodologies for continuous integration and delivery in data engineering
Preferred Skills
Knowledge of Snowflake for cloud-based data warehousing solutions
Experience with other AWS data services such as Redshift, S3, and Athena
Familiarity with big data technologies such as Hadoop, Spark, and Hive
Soft Skills
Strong analytical and problem-solving skills
Excellent communication and collaboration abilities
Detail-oriented with a commitment to data quality and accuracy
Ability to work independently and manage multiple projects simultaneously
(ref:hirist.tech)
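
A Glue ETL job like those described reads from the Data Catalog, transforms with plain Spark, and writes back to the lake. A minimal skeleton with hypothetical database, table, and path names; note the awsglue module is only available inside the Glue runtime:

```python
import sys
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.dynamicframe import DynamicFrame
from awsglue.job import Job
from awsglue.utils import getResolvedOptions

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue = GlueContext(SparkContext.getOrCreate())
job = Job(glue)
job.init(args["JOB_NAME"], args)

# Source table registered in the Glue Data Catalog (names illustrative).
dyf = glue.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="raw_orders")

# Drop bad rows and duplicates using the Spark DataFrame API.
df = dyf.toDF().filter("amount > 0").dropDuplicates(["order_id"])
out = DynamicFrame.fromDF(df, glue, "clean_orders")

# Write curated parquet back to the lake (path illustrative).
glue.write_dynamic_frame.from_options(
    frame=out,
    connection_type="s3",
    connection_options={"path": "s3://lake-bucket/curated/orders/"},
    format="parquet")

job.commit()
```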

Posted 12 hours ago

Apply

0 years

0 Lacs

Delhi, India

On-site

What You'll Do
Architect and scale modern data infrastructure: ingestion, transformation, warehousing, and access
Define and drive enterprise data strategy: governance, quality, security, and lifecycle management
Design scalable data platforms that support both operational insights and ML/AI applications
Translate complex business requirements into robust, modular data systems
Lead cross-functional teams of engineers, analysts, and developers on large-scale data initiatives
Evaluate and implement best-in-class tools for orchestration, warehousing, and metadata management
Establish technical standards and best practices for data engineering at scale
Spearhead integration efforts to unify data across legacy and modern platforms
What You Bring
Experience in data engineering, architecture, or backend systems
Strong grasp of system design, distributed data platforms, and scalable infrastructure
Deep hands-on experience with cloud platforms (AWS, Azure, or GCP) and tools like Redshift, BigQuery, Snowflake, S3 and Lambda
Expertise in data modeling (OLTP/OLAP), ETL pipelines, and data warehousing (see the sketch after this listing)
Experience with big data ecosystems: Kafka, Spark, Hive, Presto
Solid understanding of data governance, security, and compliance frameworks
Proven track record of technical leadership and mentoring
Strong collaboration and communication skills to align technology with the business
Bachelor's or Master's in Computer Science, Data Engineering, or a related field
Nice to Have (Your Edge)
Experience with real-time data streaming and event-driven architectures
Exposure to MLOps and model deployment pipelines
Familiarity with data DevOps and Infrastructure as Code (Terraform, CloudFormation, CI/CD pipelines)
(ref:hirist.tech)
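
The OLAP/dimensional-modeling expertise above typically shows up as fact-to-dimension joins feeding aggregates. A minimal PySpark sketch over hypothetical star-schema tables (names and columns are illustrative):

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("olap_demo").getOrCreate()

# Hypothetical star schema: one fact table, two dimensions.
fact_sales = spark.read.parquet("s3://lake/warehouse/fact_sales/")
dim_product = spark.read.parquet("s3://lake/warehouse/dim_product/")
dim_date = spark.read.parquet("s3://lake/warehouse/dim_date/")

# Classic OLAP rollup: revenue by product category and month.
report = (fact_sales
          .join(dim_product, "product_key")
          .join(dim_date, "date_key")
          .groupBy("category", "year_month")
          .agg(F.sum("revenue").alias("total_revenue"))
          .orderBy("year_month", "category"))

report.show()
```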

Posted 12 hours ago

Apply
Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies