5.0 - 10.0 years
20 - 25 Lacs
Bengaluru
Work from Office
Job Description

Where you'll work: India (Remote)

Engineering at GoTo
We're the trailblazers of remote work technology. We build powerful, flexible work software that empowers everyone to live their best life, at work and beyond, and blaze even more trails along the way. There's ample room for growth, so you can blaze your own trail here too. When you join a GoTo product team, you'll take on a key role in this process and see your work used by millions of users worldwide.

Your Day to Day
As a Senior Software Engineer - Big Data, you will:
- Design, develop, and maintain robust, scalable, and efficient ETL/ELT pipelines to process structured and unstructured data from various sources, applying expert-level Python programming.
- Leverage AWS services (e.g., S3, EKS, Lambda, EMR) to architect and implement cloud-native data solutions.
- Work with Apache Spark and Databricks to process large-scale datasets, optimize performance, and build reusable data transformations.
- Design and implement data models (both relational and dimensional) that support analytics, reporting, and machine learning use cases.
- Schedule, monitor, and orchestrate workflows using Apache Airflow or equivalent tools.
- Collaborate with analysts, data scientists, and business stakeholders to deliver trusted, high-quality data for downstream consumption.
- Build data quality checks, logging, monitoring, and alerting to ensure pipeline reliability and visibility.
- Develop SQL-based transformations and optimize queries for performance in cloud data warehouses and lakehouses.
- Enable data-driven decisions by supporting self-service BI tools like Tableau, ensuring accurate and timely data availability.
- Ensure adherence to data governance, security, and compliance requirements.
- Mentor junior engineers and contribute to engineering best practices, including CI/CD, testing, and documentation.
What We're Looking For
As a Senior Software Engineer - Big Data, your background will look like:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 5+ years of experience in data engineering or similar roles, with proven ability to build and scale end-to-end data pipelines.
- Strong expertise in ETL/ELT development, data ingestion, and transformation using SQL and scripting languages (Python preferred).
- Hands-on experience with Apache Spark and Databricks for big data processing.
- In-depth knowledge of AWS services such as S3, Hive, Lambda, and EMR.
- Proficiency in data modeling, including dimensional and normalized models.
- Experience with Airflow or similar orchestration frameworks.
- Familiarity with BI tools like Tableau for reporting and dashboarding.
- Strong understanding of data warehousing, lakehouse architectures, and modern data stack concepts.
- Excellent problem-solving skills, communication, and the ability to work in an agile and collaborative environment.

At GoTo, authenticity and an inclusive culture are key to our thriving workplace, where diverse perspectives drive innovation and growth. Our team of GoGetters is passionate about learning, exploring, and working together to achieve success while staying committed to delivering exceptional experiences for our customers. We take pride in supporting our employees with comprehensive benefits, wellness programs, and global opportunities for professional and personal development. By maintaining an inclusive environment, we empower our teams to do their best work, make a meaningful impact, and grow their careers. Learn more.
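The data-quality-check and alerting duties the role describes can be sketched in plain Python. This is a minimal illustration, not GoTo's actual tooling; the schema, field names, and error-rate threshold are all hypothetical:

```python
# Minimal sketch of a row-level data-quality gate for an ETL pipeline.
# The required fields and rules below are invented for illustration.

REQUIRED_FIELDS = {"user_id", "event_ts", "event_type"}

def validate_row(row: dict) -> list:
    """Return a list of quality violations for one record."""
    errors = []
    missing = REQUIRED_FIELDS - row.keys()
    if missing:
        errors.append("missing fields: %s" % sorted(missing))
    if "user_id" in row and not str(row["user_id"]).strip():
        errors.append("empty user_id")
    return errors

def run_quality_gate(rows, max_error_rate=0.05):
    """Split rows into clean/rejected and report whether the batch passes."""
    clean, rejected = [], []
    for row in rows:
        (rejected if validate_row(row) else clean).append(row)
    error_rate = len(rejected) / max(len(clean) + len(rejected), 1)
    return clean, rejected, error_rate <= max_error_rate

rows = [
    {"user_id": "u1", "event_ts": 1, "event_type": "click"},
    {"user_id": "", "event_ts": 2, "event_type": "view"},   # empty id
    {"event_ts": 3},                                        # missing fields
]
clean, rejected, ok = run_quality_gate(rows, max_error_rate=0.7)
```

In a real pipeline the `ok` flag would feed monitoring and alerting rather than a return value, and rejected rows would land in a quarantine table for inspection.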
Posted 1 week ago
3.0 - 8.0 years
25 - 30 Lacs
Bengaluru
Work from Office
Advertising at Amazon is a fast-growing, multi-billion-dollar business that spans desktop, mobile, and connected devices; encompasses ads on Amazon and a vast network of hundreds of thousands of third-party publishers; and extends across the US, EU, and an increasing number of international geographies. One of the key focus areas is Traffic Quality, where we endeavour to identify non-human and invalid traffic within programmatic ad sources and weed it out to ensure a high-quality advertising marketplace. We do this by building machine learning and optimization algorithms that operate at scale and leverage nuanced features about user, context, and creative engagement to determine the validity of traffic. The challenge is to stay one step ahead by investing in deep analytics and developing new algorithms that address emergent attack vectors in a structured and scalable fashion. We are committed to building a long-term traffic quality solution that encompasses all Amazon advertising channels and provides industry-leading traffic filtering leveraging GenAI and state-of-the-art deep learning techniques. Our systems preserve advertiser trust and save them hundreds of millions of dollars of wasted spend.

A Data Scientist is responsible for delivering deep, data-driven analyses with insights that drive the business. They use a combination of analytics, data visualization, and machine learning to identify gaps in current solutions as well as prototype new algorithms that close those gaps. The ideal candidate should have strong experience in deep-dive analytics and data visualization, thorough knowledge of statistical techniques, and strong breadth in machine learning. The candidate should also have good programming and design skills to implement machine learning algorithms in practice on massive unstructured datasets.

- 3+ years of data scientist experience
- 4+ years of experience with data querying languages (e.g., SQL), scripting languages (e.g., Python), or statistical/mathematical software (e.g., R, SAS, MATLAB)
- 3+ years of experience with machine learning/statistical modeling data analysis tools and techniques, and the parameters that affect their performance
- Experience applying theoretical models in an applied environment
- Experience with data scripting languages (e.g., SQL, Python, R) or statistical/mathematical software (e.g., R, SAS, or MATLAB)
- Experience with big data: processing, filtering, and presenting large quantities (100K to millions of rows) of data
- Experience in an ML or data scientist role with a large technology company
- Experience in computational advertising
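As a toy illustration of the statistical side of traffic-quality work, the sketch below flags sources whose request rate is a z-score outlier. The feature (requests per minute) and the 3-sigma threshold are hypothetical; production invalid-traffic detection relies on far richer features and ML models:

```python
# Illustrative statistical filter for anomalous ad-traffic sources.
from statistics import mean, stdev

def flag_anomalous(requests_per_minute, z_threshold=3.0):
    """Flag sources whose request rate is a z-score outlier above the mean."""
    rates = list(requests_per_minute.values())
    mu, sigma = mean(rates), stdev(rates)
    if sigma == 0:
        return set()  # all sources behave identically; nothing to flag
    return {src for src, r in requests_per_minute.items()
            if (r - mu) / sigma > z_threshold}

# Twenty well-behaved sources plus one requesting far above the baseline.
traffic = {"src%d" % i: 10 for i in range(20)}
traffic["bot"] = 500
```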
Posted 1 week ago
2.0 - 7.0 years
25 - 30 Lacs
Bengaluru
Work from Office
As industries race to embrace AI, traditional database solutions fall short of rising demands for versatility, performance, and affordability. Couchbase is leading the way with Capella, the developer data platform for critical applications in our AI world. By uniting transactional, analytical, mobile, and AI workloads into a seamless, fully managed solution, Couchbase empowers developers and enterprises to build and scale applications with unmatched flexibility, performance, and cost-efficiency from cloud to edge. Trusted by over 30% of the Fortune 100, Couchbase is unlocking innovation, accelerating AI transformation, and redefining customer experiences. Come join our mission.

Job description
- Build cutting-edge products at the intersection of AI, databases, and data processing.
- Discuss and debate with your peers as you help figure out product requirements and the architectural approach to getting things built, tested, and supported.
- Design, implement, test, troubleshoot, and support needle-mover capabilities across server, mobile, SDKs, cloud, and AI with simplicity, elegance, and economy. Think quality; think leverage.
- Develop high-quality software and use unit, component, and end-to-end automation tests so we know we have high-quality software.

Eligibility
- Education: BE/B.Tech/M.Tech with strong Computer Science fundamentals.
- 2+ years of experience working with languages like Go, Java, Python, or C/C++.
- You think distributed systems problems, AI models, and data structures are cool.
- You're a self-motivated, independent, and high-performance individual. You learn quickly and you enjoy worthy challenges.
- You're a good communicator and an excellent team player.
- You like working in organisations that strive to have a good balance between doing it right and moving quickly.

This role requires a hybrid work arrangement, with an expectation to be in the office (Bangalore) three days per week.
Why Couchbase
Modern customer experiences need a flexible cloud database platform that can power applications spanning from cloud to edge and everything in between. Couchbase's mission is to simplify how developers and architects develop, deploy, and consume modern applications wherever they are. We have reimagined the database with our fast, flexible, and affordable cloud database platform Capella, allowing organizations to quickly build applications that deliver premium experiences to their customers, all with best-in-class price performance. More than 30% of the Fortune 100 trust Couchbase to power their modern applications and build innovative new ones. See our recent awards to learn why Couchbase is a great place to work. We are honored to be a part of the Best Places to Work Award for the Bay Area and the UK.

Couchbase offers a total rewards approach to benefits that recognizes the value you create here, so that you in turn may best serve yourself and your family. Some benefits include:
- Generous Time Off Program - Flexibility to care for you and your family
- Wellness Benefits - A variety of world-class medical plans to choose from, along with dental, vision, life insurance, and employee assistance programs*
- Financial Planning - RSU equity program*, ESPP program*, Retirement program, and Business Travel Insurance
- Career Growth - Be valued, Create value approach
- Fun Perks - An ergonomic and comfortable in-office / WFH setup, plus food and snacks for in-office employees
And much more! *Note: some programs are not applicable to all countries. Please discuss with a Couchbase recruiter to learn more.

Learn more about Couchbase: News and Press Releases | Couchbase Capella | Couchbase Blog | Investors

Disclaimer: Couchbase is committed to being an equal opportunity employer.
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability, protected veteran status, or any other characteristic protected by law. Join an impact initiative group and experience the amazing feeling of Couchbase's can-do culture. By using this website and submitting your information, you acknowledge our Candidate Privacy Notice and understand your personal information may be processed in accordance with our Candidate Privacy Notice following guidelines in your country of application.
Posted 1 week ago
8.0 - 13.0 years
25 - 30 Lacs
Mumbai
Work from Office
About this role

Overview: We are looking for an innovative, hands-on technology leader to run Global Data Operations for one of the largest global FinTechs. This is a new role that will transform how we manage and process high-quality data at scale and reflects our commitment to invest in an Enterprise Data Platform to unlock our data strategy for BlackRock and our Aladdin Client Community. A technology-first mindset, to manage and run a modern global data operations function with high levels of automation and engineering, is essential. This role requires a deep understanding of data, domains, and the associated controls.

Key responsibilities: The ideal candidate will be a high-energy, technology- and data-driven individual who has a track record of leading and doing the day-to-day operations.
- Ensure on-time, high-quality data delivery with a single pane of glass for data pipeline observability and support
- Live and breathe DataOps best practices across culture, processes, and technology
- Partner cross-functionally to enhance existing data sets (eliminating manual inputs and ensuring high quality) and to onboard new data sets
- Lead change while ensuring daily operational excellence, quality, and control
- Build and maintain deep alignment with key internal partners on ops tooling and engineering
- Foster an agile, collaborative culture which is creative, open, supportive, and dynamic

Knowledge and Experience:
- 8+ years of experience in hands-on data operations, including data pipeline monitoring and engineering
- Technical expertise including experience with data processing, orchestration (Airflow), data ingestion, cloud-based databases/warehousing (Snowflake), and business intelligence tools
- The ability to operate and monitor large data sets through the data lifecycle, including the tooling and observability required to ensure data quality and control at scale
- Experience implementing, monitoring, and operating data pipelines that are fast, scalable, reliable, and accurate
- Understanding of modern-day data highways, the associated challenges, and effective controls
- Passionate about data platforms, data quality, and everything data
- Practical and detail-oriented operations leader
- Inquisitive leader who will bring new ideas that challenge the status quo
- Ability to navigate a large, highly matrixed organization
- Strong presence with clients
- Bachelor's Degree in Computer Science, Engineering, Mathematics, or Statistics

Our benefits. Our hybrid work model. About BlackRock. This mission would not be possible without our smartest investment - the one we make in our employees. It's why we're dedicated to creating an environment where our colleagues feel welcomed, valued, and supported with networks, benefits, and development opportunities to help them thrive. For additional information on BlackRock, please visit @blackrock | Twitter: @blackrock | LinkedIn: www.linkedin.com/company/blackrock

BlackRock is proud to be an Equal Opportunity Employer. We evaluate qualified applicants without regard to age, disability, family status, gender identity, race, religion, sex, sexual orientation, and other protected attributes at law.
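The "single pane of glass" observability idea in the responsibilities above can be illustrated with a minimal freshness check over pipeline-run metadata. The pipeline names, timestamps, and 24-hour SLA are hypothetical, not BlackRock's actual setup:

```python
# Hedged sketch of pipeline freshness monitoring: report pipelines whose
# last successful run is older than an SLA window.
from datetime import datetime, timedelta, timezone

def stale_pipelines(last_success, max_age=timedelta(hours=24), now=None):
    """Return sorted names of pipelines that breach the freshness SLA."""
    now = now or datetime.now(timezone.utc)
    return sorted(name for name, ts in last_success.items()
                  if now - ts > max_age)

# Illustrative run metadata (in practice this would come from an
# orchestrator such as Airflow).
now = datetime(2024, 1, 2, tzinfo=timezone.utc)
runs = {
    "positions_load": datetime(2024, 1, 1, 23, tzinfo=timezone.utc),
    "benchmarks_load": datetime(2023, 12, 30, tzinfo=timezone.utc),
}
```

A stale list that is non-empty would typically page the on-call operator rather than just be returned.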
Posted 1 week ago
3.0 - 6.0 years
11 - 15 Lacs
Bengaluru
Work from Office
- Designing technical architecture for IET Digital utilizing current tools and platforms focused on cloud architecture, cloud build, etc., including the SPARQ deployment strategy in Azure (IaaS or PaaS) and AWS and its applicability across the rest of the enterprise.
- Designing the technical stack for Cordant (IET Digital's flagship product), including tooling decisions.
- Enabling the enterprise architecture vision and strategy, and supporting the enterprise-wide applications and business systems roadmap.
- Designing servers, storage, security, networks, virtualization/cloud, systems software, tools, and governance to meet specified goals.
- Applying existing technologies, approaches, and methodologies in new combinations to design new products, systems, or processes. Viewed internally and externally as a specialist in the discipline.
- Leading the definition, design, and documentation of technical environments. Deploying solution architectures, conducting analysis of alternative architectures, and creating architectural standards.
- Defining processes to ensure conformance with standards, instituting solution-testing criteria, and promoting a clear and consistent business vision through technical architectures.
- Planning and delivering legacy infrastructure transformation and migration to drive next-generation business outcomes.
- Driving Continuous Integration and Continuous Delivery (CI/CD) based application and cloud infrastructure development.
- Understanding, learning, and applying new automated build, test, and deployment capabilities, and helping develop project teams towards integrating such solutions.
- Collaborating with internal development teams, external partners, and QA teams to help ensure end-to-end quality.

Fuel your passion
To be successful in this role you will:
- Have a Bachelor's degree from an accredited university or college with a minimum of 10 additional years of experience in Infrastructure Architecture.
- Have experience working with the Linux operating system and knowledge of hybrid cloud environments.
- Have an understanding of microservice design and architectural patterns.
- Have strong expertise in DevOps and CI/CD implementation, with thorough knowledge of cloud-native development.
- Have expertise implementing Keycloak-based IAM solutions that support the integration of enterprise user directories such as LDAP and AD, and/or third-party SSO providers, for identity information and applications via standards-based tokens.
- Have expertise implementing Role-Based Access Control (RBAC) and Attribute-Based Access Control (ABAC) policy creation and enforcement.
- Have expertise with SAML 2.0, OpenID Connect, and OAuth 2.0.
- Have experience with load balancers such as Apache, Nginx, and HAProxy.
- Have familiarity with PKI infrastructure: certificate authorities, OCSP, CRLs, CSRs, X.509 certificate structures, and PKCS#12 certificate containers.
- Have experience with microservices architectures and their components, including Docker and Kubernetes.
- Have experience and domain knowledge related to data processing, plus experience with DevSecOps and Identity and Access Management.
- Have experience with software configuration management tools such as Git/GitLab, and with software development environments and CI/CD tools such as Jenkins.
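The RBAC requirement above reduces to a small core idea: map roles to permissions and grant access if any of the user's roles carries the requested permission. The roles and permission strings below are invented for illustration; in practice they would come from an IAM provider such as Keycloak via claims in a standards-based token:

```python
# Minimal RBAC sketch: role-to-permission mapping plus an access check.
# Roles and permissions here are hypothetical examples.

ROLE_PERMISSIONS = {
    "viewer": {"dashboard:read"},
    "operator": {"dashboard:read", "pipeline:run"},
    "admin": {"dashboard:read", "pipeline:run", "user:manage"},
}

def is_allowed(user_roles, permission):
    """Grant access if any of the user's roles carries the permission."""
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in user_roles)
```

ABAC extends this by evaluating attributes of the user, resource, and request context instead of a fixed role table.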
Posted 1 week ago
2.0 - 6.0 years
8 - 12 Lacs
Kochi
Work from Office
Job Title: Cloud Engineer - AWS, CI/CD & Infrastructure Automation
Department: Information Technology / Research Computing
Location: Bangalore/Kochi/Pan-India
Shift: General
Job Type: Full-Time
Reports To: Director of IT Infrastructure / Head of Research Computing

Position Summary: DBiz.ai is seeking a dedicated and technically proficient Cloud Engineer to support our growing cloud infrastructure needs across academic, research, and administrative domains. The ideal candidate will have strong experience with AWS core services, CI/CD pipelines using GitHub, and Infrastructure as Code (IaC) to help modernize and automate our cloud environments.

Key Responsibilities:
- Design, implement, and manage AWS-based cloud infrastructure to support academic and research computing needs.
- Develop and maintain CI/CD pipelines for deploying applications and services using GitHub Actions or similar tools.
- Automate infrastructure provisioning and configuration using IaC tools such as Terraform or AWS CloudFormation.
- Design and implement solutions using AWS machine learning (SageMaker, Bedrock), data analytics (Redshift), and data processing tools (Glue, Step Functions) to support automation and intelligent decision-making.
- Collaborate with faculty, researchers, and IT staff to support cloud-based research workflows and data pipelines.
- Ensure cloud environments are secure, scalable, and cost-effective.
- Monitor system performance and troubleshoot issues related to cloud infrastructure and deployments.
- Document cloud architecture, workflows, and best practices for internal knowledge sharing and compliance.

Required Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Strong experience with AWS core services (EC2, S3, IAM, VPC, Lambda, CloudWatch, etc.).
- Proficiency in GitHub and building CI/CD pipelines.
- Hands-on experience with Infrastructure as Code tools (Terraform, CloudFormation, etc.).
- Familiarity with scripting languages (e.g., Python, Bash).
- Exposure to AWS machine learning services (e.g., SageMaker, Bedrock), data analytics tools (e.g., Redshift), and data processing and orchestration services (e.g., Glue, Step Functions).
- Strong understanding of cloud security, networking, and architecture principles.
- Excellent communication and collaboration skills, especially in academic or research settings.

Preferred Qualifications:
- AWS certification (e.g., AWS Certified Solutions Architect - Associate).
- Experience supporting research computing environments or academic IT infrastructure.
- Familiarity with containerization (Docker, Kubernetes) and hybrid cloud environments.
- Experience working in a university or public sector environment.
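The Infrastructure as Code responsibility can be illustrated with a minimal CloudFormation-style template built in Python and serialized to JSON. The resource and bucket names are hypothetical, and real templates would normally be authored directly in Terraform or CloudFormation YAML; this sketch just shows the declarative shape IaC tools consume:

```python
# Sketch of IaC: a minimal CloudFormation-style template as a plain dict.
import json

def s3_bucket_template(bucket_name):
    """Build a one-resource template declaring a versioned S3 bucket."""
    template = {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "ResearchDataBucket": {  # logical ID, hypothetical
                "Type": "AWS::S3::Bucket",
                "Properties": {
                    "BucketName": bucket_name,
                    "VersioningConfiguration": {"Status": "Enabled"},
                },
            }
        },
    }
    return json.dumps(template, indent=2)

doc = s3_bucket_template("research-data-example")
```

The point of the declarative form is that the same document can be diffed, reviewed in a pull request, and applied idempotently by the provisioning tool.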
Posted 1 week ago
5.0 - 10.0 years
20 - 25 Lacs
Gurugram
Work from Office
At American Express, we know that with the right backing, people and businesses have the power to progress in incredible ways. Whether we're supporting our customers' financial confidence to move ahead, taking commerce to new heights, or encouraging people to explore the world, our colleagues are constantly redefining what's possible, and we're proud to back each other every step of the way. When you join #TeamAmex, you become part of a diverse community of over 60,000 colleagues, all with a common goal to deliver an exceptional customer experience every day. We back our colleagues with the support they need to thrive, professionally and personally. That's why we have Amex Flex, our enterprise working model that provides greater flexibility to colleagues while ensuring we preserve the important aspects of our unique in-person culture. We are building an energetic, high-performance team with a nimble and creative mindset to drive our technology and products. American Express (AXP) is a powerful brand, a great place to work, and has unparalleled scale. Join us for an exciting opportunity in Marketing Technology within American Express Technologies.

How will you make an impact in this role?
There are hundreds of opportunities to make your mark on technology and life at American Express. Here's just some of what you'll be doing:
- As a part of our team, you will be developing innovative, high-quality, and robust operational engineering capabilities.
- Develop software in our technology stack, which is constantly evolving but currently includes big data, Spark, Python, Scala, GCP, and the Adobe suite (like Customer Journey Analytics).
- Work with business partners and stakeholders to understand functional requirements, architecture dependencies, and business capability roadmaps.
- Create technical solution designs to meet business requirements.
- Define best practices to be followed by the team.
- Take your place as a core member of an Agile team driving the latest development practices.
- Identify and drive reengineering opportunities, and opportunities for adopting new technologies and methods.
- Suggest and recommend solution architecture to resolve business problems.
- Perform peer code review and participate in technical discussions with the team on the best solutions possible.

As part of our diverse tech team, you can architect, code, and ship software that makes us an essential part of our customers' digital lives. Here, you can work alongside talented engineers in an open, supportive, inclusive environment where your voice is valued, and you make your own decisions on what tech to use to solve challenging problems. American Express offers a range of opportunities to work with the latest technologies and encourages you to back the broader engineering community through open source. And because we understand the importance of keeping your skills fresh and relevant, we give you dedicated time to invest in your professional development. Find your place in the technology of #TeamAmex.

Minimum Qualifications:
- BS or MS degree in computer science, computer engineering, or other technical discipline, or equivalent work experience.
- 5+ years of hands-on software development experience with Big Data analytics solutions: Hadoop, Hive, Spark, Scala, Python, shell scripting, GCP Cloud (BigQuery, Bigtable), Airflow.
- Working knowledge of the Adobe suite, such as Adobe Experience Platform, Adobe Customer Journey Analytics, and CDP.
- Proficiency in SQL and database systems, with experience in designing and optimizing data models for performance and scalability.
- Design and development experience with Kafka, real-time ETL pipelines, and APIs is desirable.
- Experience in designing, developing, and optimizing data pipelines for large-scale data processing, transformation, and analysis using Big Data and GCP technologies.
- Certification in a cloud platform (GCP Professional Data Engineer) is a plus.
- Understanding of distributed (multi-tiered) systems, data structures, algorithms, and design patterns.
- Strong object-oriented programming skills and design patterns.
- Experience with CI/CD pipelines, automated test frameworks, and source code management tools (XLR, Jenkins, Git, Maven).
- Good knowledge of and experience with configuration management tools like GitHub.
- Ability to analyze complex data engineering problems, propose effective solutions, and implement them effectively. Looks proactively beyond the obvious for continuous improvement opportunities.
- Communicates effectively with product and cross-functional teams.
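The real-time ETL pattern mentioned in the qualifications can be sketched with generator stages (parse, filter, aggregate). A production pipeline would consume from Kafka and write to a warehouse such as BigQuery; this stdlib-only toy, with invented record fields, shows only the staged-transformation shape:

```python
# Illustrative streaming-style ETL chain built from generators.

def parse(lines):
    """Extract step: turn raw CSV-ish lines into records."""
    for line in lines:
        user, amount = line.split(",")
        yield {"user": user, "amount": float(amount)}

def valid(records):
    """Transform step: drop records with non-positive amounts."""
    return (r for r in records if r["amount"] > 0)

def totals(records):
    """Load/aggregate step: running total per user."""
    out = {}
    for r in records:
        out[r["user"]] = out.get(r["user"], 0.0) + r["amount"]
    return out

events = ["a,10.0", "b,-5.0", "a,2.5", "c,1.0"]
result = totals(valid(parse(events)))
```

Because each stage is lazy, records flow through one at a time, which is the same backpressure-friendly property a real streaming consumer relies on.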
Posted 1 week ago
3.0 - 5.0 years
1 - 5 Lacs
Hyderabad
Work from Office
FactSet creates flexible, open data and software solutions for over 200,000 investment professionals worldwide, providing instant access to financial data and analytics that investors use to make crucial decisions. At FactSet, our values are the foundation of everything we do. They express how we act and operate, serve as a compass in our decision-making, and play a big role in how we treat each other, our clients, and our communities. We believe that the best ideas can come from anyone, anywhere, at any time, and that curiosity is the key to anticipating our clients' needs and exceeding their expectations.

Your Team's Impact
Our clients are increasingly seeking ways to optimize the integration and maintenance of their workflows to free up time, reduce operational costs, and minimize personnel risk. FactSet's Managed Services, leveraging our middle office solutions, allow users to navigate complex workflows, ensure data quality, and access actionable insights.

What You'll Do
- Deliver technical effort estimates to the analytics team and other business stakeholders when planning new features and updating existing implementations.
- Partner with other internal teams to design/improve efficiency and optimize current processes.
- Consistently engage and address client needs while serving as a primary point of contact.
- Use Python and SQL to design and implement scalable data processing solutions.
- Design, develop, and code review SQL and Python scripts for aggregating and visualizing complex datasets.
- Demonstrable technical expertise with the following: Databases: SQL Server, PostgreSQL; Scripting and exploration: Python, SQL; Visualization: Power BI.
- Work with cloud platforms to deploy and maintain data solutions (AWS, Snowflake).
- Lead a team of 3-5 members.

What We're Looking For
Required Skills:
- Bachelor's degree in engineering with specialization in Computer Science, IT, or Electronics.
- 3-5 years of relevant experience, preferably with 1-2 years leading a team.
- Working knowledge of diverse financial concepts, including bonds, equities, and similar assets.
- Understanding of both business and technical requirements, and the ability to serve as a conduit between business and technology teams.
- Ability to prioritize multiple projects and work independently while managing the team.
- Flexible to work in a hybrid model.

What's In It For You
At FactSet, our people are our greatest asset, and our culture is our biggest competitive advantage. Being a FactSetter means:
- The opportunity to join an S&P 500 company with over 45 years of sustainable growth powered by the entrepreneurial spirit of a start-up.
- Support for your total well-being. This includes health, life, and disability insurance, as well as retirement savings plans and a discounted employee stock purchase program, plus paid time off for holidays, family leave, and company-wide wellness days.
- Flexible work accommodations. We value work/life harmony and offer our employees a range of accommodations to help them achieve success both at work and in their personal lives.
- A global community dedicated to volunteerism and sustainability, where collaboration is always encouraged, and individuality drives solutions.
- Career progression planning with dedicated time each month for learning and development.
- Business Resource Groups open to all employees that serve as a catalyst for connection, growth, and belonging.
Learn more about our benefits here.

Salary is just one component of our compensation package and is based on several factors including but not limited to education, work experience, and certifications.

Company Overview: FactSet (NYSE:FDS | NASDAQ:FDS) helps the financial community to see more, think bigger, and work better. Our digital platform and enterprise solutions deliver financial data, analytics, and open technology to more than 8,200 global clients, including over 200,000 individual users. Clients across the buy-side and sell-side, as well as wealth managers, private equity firms, and corporations, achieve more every day with our comprehensive and connected content, flexible next-generation workflow solutions, and client-centric specialized support. As a member of the S&P 500, we are committed to sustainable growth and have been recognized among the Best Places to Work in 2023 by Glassdoor as a Glassdoor Employees' Choice Award winner. Learn more at www.factset.com and follow us on X and LinkedIn.

At FactSet, we celebrate difference of thought, experience, and perspective. Qualified applicants will be considered for employment without regard to characteristics protected by law.
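The "Python and SQL for scalable data processing" combination the role centers on can be sketched with sqlite3 so the example runs anywhere; the holdings table and its columns are hypothetical stand-ins for a real SQL Server or PostgreSQL schema:

```python
# Sketch of a Python + SQL aggregation step, self-contained via sqlite3.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE holdings (fund TEXT, asset TEXT, value REAL)")
conn.executemany(
    "INSERT INTO holdings VALUES (?, ?, ?)",
    [("F1", "bond", 100.0), ("F1", "equity", 250.0), ("F2", "bond", 80.0)],
)

# Aggregate total market value per fund, largest first.
rows = conn.execute(
    "SELECT fund, SUM(value) FROM holdings GROUP BY fund ORDER BY 2 DESC"
).fetchall()
```

In practice the query would run against the warehouse and the Python layer would handle parameterization, validation, and handing results to a visualization tool such as Power BI.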
Posted 1 week ago
1.0 - 2.0 years
8 - 9 Lacs
Hyderabad
Work from Office
FactSet creates flexible, open data and software solutions for over 200,000 investment professionals worldwide, providing instant access to financial data and analytics that investors use to make crucial decisions. At FactSet, our values are the foundation of everything we do. They express how we act and operate , serve as a compass in our decision-making, and play a big role in how we treat each other, our clients, and our communities. We believe that the best ideas can come from anyone, anywhere, at any time, and that curiosity is the key to anticipating our clients needs and exceeding their expectations. Your Teams Impact Our clients are increasingly seeking ways to optimize the integration and maintenance of their workflows to free up time, reduce operational costs, and minimize personnel risk. FactSets Managed Services, leveraging our middle office solutions, allowing users to navigate complex workflows, ensure data quality, and access actionable What Youll Do Provide input on technical effort estimates for new features and updates, collaborating with analytics and business teams. Work in a team of 3-5 members to maintain/improve efficiency and optimize current processes. Use Python and SQL to design and implement scalable data processing solutions. Design and peer review SQL and Python scripts used for aggregating and visualizing complex data. Strong technical knowledge with the following: Databases: SQL Server, PostgreSQL Scripting and exploration: Python, SQL Visualization: Power BI Work with cloud platforms to deploy and maintain data solutions (AWS, Snowflake). What Were Looking For Bachelors degree in engineering with specialization in Computer Science, IT, or Electronics. A minimum of 1-2 years of relevant experience. Working knowledge of diverse financial concepts, including bonds, equities, and similar assets. Ability to prioritize multiple projects and work independently while reporting to the team. Flexible to work in a hybrid model. 
Whats In It For You At FactSet, our people are our greatest asset, and our culture is our biggest competitive advantage. Being a FactSetter means: The opportunity to join an SP 500 company with over 45 years of sustainable growth powered by the entrepreneurial spirit of a start-up. Support for your total well-being. This includes health, life, and disability insurance, as well as retirement savings plans and a discounted employee stock purchase program, plus paid time off for holidays, family leave, and company-wide wellness days. Flexible work accommodations. We value work/life harmony and offer our employees a range of accommodations to help them achieve success both at work and in their personal lives. A global community dedicated to volunteerism and sustainability, where collaboration is always encouraged, and individuality drives solutions. Career progression planning with dedicated time each month for learning and development. Business Resource Groups open to all employees that serve as a catalyst for connection, growth, and belonging. Learn more about our benefits here . Salary is just one component of our compensation package and is based on several factors including but not limited to education, work experience, and certifications. Company Overview: FactSet ( NYSE:FDS | NASDAQ:FDS ) helps the financial community to see more, think bigger, and work better. Our digital platform and enterprise solutions deliver financial data, analytics, and open technology to more than 8,200 global clients, including over 200,000 individual users. Clients across the buy-side and sell-side, as well as wealth managers, private equity firms, and corporations, achieve more every day with our comprehensive and connected content, flexible next-generation workflow solutions, and client-centric specialized support. 
As a member of the S&P 500, we are committed to sustainable growth and have been recognized among the Best Places to Work in 2023 by Glassdoor as a Glassdoor Employees' Choice Award winner. Learn more at www.factset.com and follow us on X and LinkedIn. At FactSet, we celebrate difference of thought, experience, and perspective. Qualified applicants will be considered for employment without regard to characteristics protected by law.
Posted 1 week ago
1.0 - 2.0 years
12 - 14 Lacs
Hyderabad
Work from Office
FactSet creates flexible, open data and software solutions for over 200,000 investment professionals worldwide, providing instant access to financial data and analytics that investors use to make crucial decisions. At FactSet, our values are the foundation of everything we do. They express how we act and operate, serve as a compass in our decision-making, and play a big role in how we treat each other, our clients, and our communities. We believe that the best ideas can come from anyone, anywhere, at any time, and that curiosity is the key to anticipating our clients' needs and exceeding their expectations.

Your Team's Impact
Our clients are increasingly seeking ways to optimize the integration and maintenance of their workflows to free up time, reduce operational costs, and minimize personnel risk. FactSet's Managed Services leverage our middle office solutions, allowing users to navigate complex workflows, ensure data quality, and access actionable insights.

What You'll Do
- Provide input on technical effort estimates for new features and updates, collaborating with analytics and business teams.
- Work in a team of 3-5 members to maintain and improve efficiency and optimize current processes.
- Use Python and SQL to design and implement scalable data processing solutions.
- Design and peer review SQL and Python scripts used for aggregating and visualizing complex data.
- Apply strong technical knowledge of the following: databases (SQL Server, PostgreSQL); scripting and exploration (Python, SQL); visualization (Power BI).
- Work with cloud platforms to deploy and maintain data solutions (AWS, Snowflake).

What We're Looking For
- Bachelor's degree in engineering with specialization in Computer Science, IT, or Electronics.
- A minimum of 1-2 years of relevant experience.
- Working knowledge of diverse financial concepts, including bonds, equities, and similar assets.
- Ability to prioritize multiple projects and work independently while reporting to the team.
- Flexibility to work in a hybrid model.
What's In It For You
At FactSet, our people are our greatest asset, and our culture is our biggest competitive advantage. Being a FactSetter means:
- The opportunity to join an S&P 500 company with over 45 years of sustainable growth powered by the entrepreneurial spirit of a start-up.
- Support for your total well-being. This includes health, life, and disability insurance, as well as retirement savings plans and a discounted employee stock purchase program, plus paid time off for holidays, family leave, and company-wide wellness days.
- Flexible work accommodations. We value work/life harmony and offer our employees a range of accommodations to help them achieve success both at work and in their personal lives.
- A global community dedicated to volunteerism and sustainability, where collaboration is always encouraged, and individuality drives solutions.
- Career progression planning with dedicated time each month for learning and development.
- Business Resource Groups open to all employees that serve as a catalyst for connection, growth, and belonging.
Learn more about our benefits here. Salary is just one component of our compensation package and is based on several factors, including but not limited to education, work experience, and certifications.

Company Overview
FactSet (NYSE:FDS | NASDAQ:FDS) helps the financial community to see more, think bigger, and work better. Our digital platform and enterprise solutions deliver financial data, analytics, and open technology to more than 8,200 global clients, including over 200,000 individual users. Clients across the buy-side and sell-side, as well as wealth managers, private equity firms, and corporations, achieve more every day with our comprehensive and connected content, flexible next-generation workflow solutions, and client-centric specialized support.
As a member of the S&P 500, we are committed to sustainable growth and have been recognized among the Best Places to Work in 2023 by Glassdoor as a Glassdoor Employees' Choice Award winner. Learn more at www.factset.com and follow us on X and LinkedIn. At FactSet, we celebrate difference of thought, experience, and perspective. Qualified applicants will be considered for employment without regard to characteristics protected by law.
Posted 1 week ago
5.0 - 10.0 years
50 - 60 Lacs
Bengaluru
Work from Office
Context: In the modern banking age, financial institutions need to bring classical data drivers and evolving business drivers together in a single platform. These drivers also need to communicate with each other and share data products for enterprise consumption. Traditional data platforms handle classical data drivers well but fail to communicate with evolving business drivers due to the limitations of older technologies and implementation approaches. A Modern Data Platform fills this gap and takes the business to the next level of growth and expansion using data-driven approaches. The technology transformation of recent years makes such implementations feasible.

Your Opportunity
You will be responsible for leading the Modern Data Platform Practice, which involves providing solutions to customers on traditional data warehouses and Big Data platforms, both on-premises and in the cloud. It covers architecting data platforms, defining data engineering designs, and choosing appropriate technologies and tools across on-premises and cloud services. You will help the organization strengthen its Modern Data Platform capabilities, lead pre-sales discussions on data platforms, provide the technology architecture in RFP responses, and lead technology POCs/MVPs.
Your Qualifications
We expect you to have the following qualifications and experience to perform the suggested role effectively:
- A technology leader with an engineering academic background in Computer Science / Information Technology / Data Technologies [BE/BTech/MCA].
- Overall 5-10 years of data engineering and analytics experience, as an individual contributor as well as a technology/architecture lead.
- A minimum of 3-5 years of hands-on experience with Big Data systems across on-premises and cloud environments.
- Should have led data platform architecture design projects for mid- to large-size firms.
- Experience implementing batch and streaming/online data integrations using third-party tools and custom programs.
- Good hands-on experience with SQL and one of the following programming languages: Core Java, Scala, or Python.
- Good hands-on experience with Kafka for enabling event-driven data pipelines and processing.
- Knowledge of leading data services offered by AWS, Azure, Snowflake, and Confluent.
- Thorough understanding of distributed computing and related data structures.
- Should have implemented data governance and quality capabilities for a data platform (on-premises and/or cloud).
- Good analytical and presentation skills.
- Experience building a team from the ground up.
- Good exposure to leading RDBMS technologies and data visualization platforms.
- Should have demonstrated AI/ML models for data processing and for generating insights for end users.
- A good teammate, with the ability to work on own initiative with minimal direction.
An Oracle career can span industries, roles, countries, and cultures, giving you the opportunity to flourish in new roles and innovate, while blending work and life in. Oracle has thrived through 40+ years of change by innovating and operating with integrity while delivering for the top companies in almost every industry.
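The Kafka and streaming-integration experience listed above ultimately comes down to patterns such as keyed, windowed aggregation over an event stream. A minimal sketch of that pattern in plain Python (no Kafka dependency; the event tuples and keys below are invented for illustration, and a real pipeline would consume from a broker):

```python
from collections import defaultdict

def window_totals(events, window_s=60):
    """Tumbling-window sums per key — the core pattern behind the
    streaming aggregations a Kafka consumer would perform.
    events: iterable of (epoch_seconds, key, value) tuples."""
    totals = defaultdict(float)
    for ts, key, value in events:
        bucket = ts - (ts % window_s)  # align timestamp to window start
        totals[(bucket, key)] += value
    return dict(totals)

events = [
    (0, "payments", 10.0),
    (30, "payments", 5.0),
    (65, "payments", 2.0),  # falls into the next 60 s window
    (10, "trades", 1.0),
]
print(window_totals(events))
# {(0, 'payments'): 15.0, (60, 'payments'): 2.0, (0, 'trades'): 1.0}
```

In production the same logic would typically run inside Kafka Streams, Spark Structured Streaming, or Flink, with event-time watermarks handling late data.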
In order to nurture the talent that makes this happen, we are committed to an inclusive culture that celebrates and values diverse insights and perspectives, and a workforce that inspires thought leadership and innovation. At Oracle, we believe that innovation starts with diversity and inclusion, and that to create the future we need talent from various backgrounds, perspectives, and abilities. We ensure that individuals with disabilities are provided reasonable accommodation to successfully participate in the job application and interview process, and to perform crucial job functions in potential roles.
Posted 1 week ago
2.0 - 7.0 years
12 - 13 Lacs
Bengaluru
Work from Office
Location(s): Quay Building 8th Floor, Bagmane Tech Park, Bengaluru, IN
Line of Business: Data Estate (DE)
Job Category: Engineering & Technology
Experience Level: Experienced Hire

At Moody's, we unite the brightest minds to turn today's risks into tomorrow's opportunities. We do this by striving to create an inclusive environment where everyone feels welcome to be who they are, with the freedom to exchange ideas, think innovatively, and listen to each other and customers in meaningful ways. If you are excited about this opportunity but do not meet every single requirement, please apply! You still may be a great fit for this role or other open roles. We are seeking candidates who model our values: invest in every relationship, lead with curiosity, champion diverse perspectives, turn inputs into actions, and uphold trust through integrity.

Skills and Competencies
- Proficiency in Kubernetes and Amazon EKS (2+ years, required): essential for managing containerized applications and ensuring high availability and security in cloud-native environments.
- Strong expertise in AWS serverless technologies (required), including Lambda, API Gateway, EventBridge, and Step Functions, to build scalable and cost-efficient solutions.
- Hands-on experience with Terraform (2+ years, required): critical for managing Infrastructure as Code (IaC) across multiple environments, ensuring consistency and repeatability.
- CI/CD pipeline development using GitHub Actions (required): necessary for automating deployments and supporting agile development practices.
- Scripting skills in Python, Bash, or PowerShell (required): enables automation of operational tasks and enhances infrastructure management capabilities.
- Experience with Databricks and Apache Kafka (preferred): valuable for teams working with data pipelines, MLOps workflows, and event-driven architectures.
Education
Bachelor's degree in Computer Science or equivalent experience.

Responsibilities
Design, automate, and manage scalable cloud infrastructure using Kubernetes, AWS, Terraform, and CI/CD pipelines:
- Design and manage cloud-native infrastructure using container orchestration platforms, ensuring high availability, scalability, and security across environments.
- Implement and maintain Infrastructure as Code (IaC) using tools like Terraform to provision and manage multi-environment cloud resources consistently and efficiently.
- Develop and optimize continuous integration and delivery (CI/CD) pipelines to automate application and infrastructure deployments, supporting agile development cycles.
- Monitor system performance and reliability by configuring observability tools for logging, alerting, and metrics collection, and proactively address operational issues.
- Collaborate with cross-functional teams to align infrastructure solutions with application requirements, ensuring seamless deployment and performance optimization.
- Document technical processes and architectural decisions through runbooks, diagrams, and knowledge-sharing resources to support operational continuity and team onboarding.

About the Team
Our Data Estate DevOps team is responsible for enabling the scalable, secure, and automated infrastructure that powers Moody's enterprise data platform. We ensure the seamless deployment, monitoring, and performance of data pipelines and services that deliver curated, high-quality data to internal and external consumers. We contribute to Moody's by:
- Accelerating data delivery and operational efficiency through automation, observability, and infrastructure-as-code practices that support near real-time data processing and remediation.
- Supporting data integrity and governance by enabling traceable, auditable, and resilient systems that align with regulatory compliance and GenAI readiness.
- Empowering innovation and analytics by maintaining a modular, interoperable platform that integrates internal and third-party data sources for downstream research models, client workflows, and product applications.
By joining our team, you will be part of exciting work in cloud-native DevOps, data engineering, and platform automation, supporting global data operations across 29 countries and contributing to Moody's mission of delivering integrated perspectives on risk and growth.
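The Terraform and IaC responsibilities in this listing rest on one core idea: reconciling a declared desired state against the resources actually deployed. A toy sketch of that plan/diff step (resource names and specs below are hypothetical, and real tools handle dependencies, drift detection, and much more):

```python
def plan(desired, actual):
    """Diff desired vs actual resource maps into create/update/delete
    actions — the same reconcile model Terraform and Kubernetes
    controllers apply on every run."""
    actions = []
    for name, spec in desired.items():
        if name not in actual:
            actions.append(("create", name))
        elif actual[name] != spec:
            actions.append(("update", name))
    for name in actual:
        if name not in desired:
            actions.append(("delete", name))
    return sorted(actions)

desired = {"vpc": {"cidr": "10.0.0.0/16"}, "bucket": {"versioning": True}}
actual = {"vpc": {"cidr": "10.1.0.0/16"}, "queue": {"fifo": False}}
print(plan(desired, actual))
# [('create', 'bucket'), ('delete', 'queue'), ('update', 'vpc')]
```

Because the plan is a pure function of the two states, running it twice against an already-converged environment yields no actions, which is what makes IaC deployments repeatable.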
Posted 1 week ago
5.0 - 10.0 years
6 - 10 Lacs
Pune
Work from Office
Title/Position: Application Support Engineer - Power Tools
Job Location: Pune / Gurugram
Experience: 5+ Years
Employment Type: Full Time
Shift Timings: Rotational Shift

Job Description: We are seeking a highly skilled Power BI Engineer to join our technical support team. The successful candidate will be responsible for designing and developing interactive dashboards that provide actionable insights. They will collaborate with stakeholders to gather requirements, optimize data models, and ensure data accuracy. Additionally, the role involves integrating Power BI with various data sources, conducting rigorous testing, and providing training and support as required. The ideal candidate will have a working knowledge of Power tools and databases.

Responsibilities:
- Design, develop, and maintain Power BI dashboards and reports to provide actionable insights.
- Collaborate with stakeholders to gather requirements and translate them into technical specifications.
- Optimize data models and queries for performance and scalability.
- Integrate Power BI with various data sources, including SQL databases, Excel, and cloud services.
- Utilize Power Automate to automate workflows and improve data processing efficiency.
- Ensure data accuracy and integrity through rigorous testing and validation.
- Provide training and support to end-users on Power BI and Power Automate functionalities.
- Stay updated with the latest Power BI and Power Automate features and best practices.

Requirements:
- Bachelor's degree in Computer Science, Information Systems, or a related field.
- Proven experience as a Power BI Developer or in a similar role.
- Strong proficiency in Power BI, including DAX and Power Query.
- Good working knowledge of Power Automate for workflow automation.
- Experience with data visualization and creating interactive dashboards.
- Familiarity with SQL and data warehousing concepts.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills.
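One responsibility above, ensuring data accuracy and integrity through testing and validation before a dashboard refresh, can be sketched as a simple pre-load check. This is illustrative only: the field names and rules are assumptions, and in practice such checks would more likely live in Power Query or an upstream pipeline than in standalone Python.

```python
def validate_rows(rows, required=("date", "region", "revenue")):
    """Basic pre-load checks of the kind run before refreshing a
    dashboard dataset: required fields present, revenue non-negative.
    Returns a list of (row_index, problem) pairs."""
    errors = []
    for i, row in enumerate(rows):
        missing = [f for f in required if row.get(f) in (None, "")]
        if missing:
            errors.append((i, f"missing {missing}"))
        elif row["revenue"] < 0:
            errors.append((i, "negative revenue"))
    return errors

rows = [
    {"date": "2024-01-01", "region": "EMEA", "revenue": 120.0},
    {"date": "2024-01-01", "region": "", "revenue": 80.0},
    {"date": "2024-01-02", "region": "APAC", "revenue": -5.0},
]
print(validate_rows(rows))
# [(1, "missing ['region']"), (2, 'negative revenue')]
```

Surfacing such errors before the load, rather than after users notice wrong totals, is what "rigorous testing and validation" amounts to day to day.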
About Stratacent
Stratacent is an IT consulting and services firm, headquartered in Jersey City, NJ, with two global delivery centres in the New York City area and the New Delhi area, plus offices in London, Canada, and Pune, India. We are a leading IT services provider focusing on Financial Services, Insurance, Healthcare, and Life Sciences. We help our customers in their digital transformation journey and provide services and solutions around cloud infrastructure, data and analytics, automation, application development, and ITSM. We have partnerships with SAS, Automation Anywhere, Snowflake, Azure, AWS, and GCP. (To learn more: www.stratacent.com)

Employee Benefits:
- Group Medical Insurance
- Cab facility
- Meals/snacks
- Continuous Learning Program

Stratacent India Private Limited is an equal opportunity employer and will not discriminate against any employee or applicant for employment on the basis of race, color, creed, religion, age, sex, national origin, ancestry, handicap, or any other factors.
Posted 1 week ago
5.0 - 10.0 years
8 - 12 Lacs
Hyderabad
Work from Office
Location: Hyderabad
Job model: Hybrid

Purpose of the Job
We are looking for a proactive and customer-focused Data Scientist to support logistics operations and enhance the customer experience through advanced track-and-trace analytics. This role will focus on integrating and analyzing shipment data from TransVoyant's platform and external logistics service providers (LSPs) to ensure accurate, timely, and actionable visibility into supply chain movements. At dsm-firmenich, being a force for good is not optional. Diversity, Equity & Inclusion is a shared responsibility woven into our daily work to not only benefit our people, customers, and communities but also drive business value. Equal access to opportunities is a given, belonging is a shared feeling, and authenticity is celebrated.

Key Responsibilities
- Develop and maintain data models and analytics to support real-time shipment tracking and traceability.
- Integrate and harmonize data from TransVoyant and third-party LSP platforms (e.g., carriers, freight forwarders, 3PLs).
- Identify and resolve data gaps or inconsistencies that impact shipment visibility and customer experience.
- Collaborate with logistics operations, customer service, and business unit teams to define KPIs and improve logistics track-and-trace workflows.
- Build predictive reports and dashboards to anticipate delays, disruptions, or exceptions in the supply chain.
- Create dashboards and visualizations to communicate shipment status, trends, and performance metrics.
- Support root cause analysis and continuous improvement initiatives related to logistics performance.
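Anticipating delays and exceptions, as described in the responsibilities above, often starts with something as simple as an outlier screen on lane transit times. A hedged sketch using only the standard library (the sample data is invented; production approaches such as ML-based ETA prediction would be far richer):

```python
from statistics import mean, stdev

def flag_delays(transit_hours, z_threshold=2.0):
    """Flag shipments whose transit time deviates sharply from the
    lane's norm — a simple z-score screen for delay/exception alerts.
    Returns the indices of outlier shipments."""
    mu, sigma = mean(transit_hours), stdev(transit_hours)
    if sigma == 0:
        return []  # identical transit times: nothing to flag
    return [i for i, h in enumerate(transit_hours)
            if abs(h - mu) / sigma > z_threshold]

lane = [70, 72, 69, 71, 73, 70, 120]  # hours; one badly delayed shipment
print(flag_delays(lane))  # [6]
```

A screen like this feeds the predictive dashboards mentioned above: flagged shipments become exception alerts for customer service before the customer calls.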
We bring
- The opportunity to work for a company where sustainability is much more than a claim and is core to our strategy and purpose.
- A flexible work environment that empowers people to take accountability for their work and own the outcome.
- An eagerness to be one team and learn from each other to bring progress to life and create a better future.
- Barrier-free communities within our organization where every employee is equally valued and respected, regardless of their background, beliefs, or identity.
- A culture that prioritizes safety and well-being, both physically and mentally.
- A space to grow by encouraging and supporting curiosity and an open mindset.

You bring
- Bachelor's or Master's degree in Data Science, Supply Chain Management, Computer Science, or a related field.
- 5+ years of experience in data science or analytics roles, preferably in logistics or transportation.
- Strong knowledge of shipment tracking systems, EDI/API integrations, and logistics data standards.
- Proficiency in Python, SQL, and data visualization tools (e.g., Power BI, Tableau).
- Experience working with real-time or near-real-time data streams, and with real-time data processing and streaming technologies.
- Excellent problem-solving skills and a customer-centric mindset.
- Comfort using Microsoft Excel to analyze and summarize data, including graphs, charts, formulas, functions, solvers, and regression analysis.
- Ability to work in a fast-paced environment while managing multiple projects/priorities simultaneously.
- Excellent attention to detail, problem-solving, organizational, and prioritization skills.
- Strong communication skills to collaborate effectively with cross-functional teams and present findings to both technical and non-technical audiences.
- Knowledge of supply chain, transportation, and logistics operations is a plus.
- Familiarity with platforms such as TransVoyant, FourKites, Project44, or similar logistics service provider visibility tools.
- Experience with machine learning models for ETA prediction or anomaly detection.
- Understanding of global transportation modes (ocean, air, road, rail) and logistics operations.

The application process
Interested in this position? Please apply online by uploading your resume in English via our career portal. We aim to build a workplace where opportunity really is equal, so everyone can thrive. We do not discriminate: there's a place for everyone at dsm-firmenich. As a committed equal opportunity employer, we ensure our recruitment practices are inclusive and fair. We encourage the recruitment of a diverse workforce, representative of the communities in which we work, by using inclusive language, diverse interview panels, and diversified sourcing strategies. Selection is based on qualifications, competency, experience, performance history, and fit with the team to advance fair and equitable opportunity. Employment decisions are based upon job-related reasons regardless of an applicant's race, color, ethnicity, national origin, religion, gender, gender identity or expression, sexual orientation, age, disability, background, genetic information, protected veteran status, or any other status protected by law. We are committed to providing reasonable support for disabled applicants in our recruiting process. Should you need assistance, and are comfortable sharing this, please let us know.

About dsm-firmenich
As innovators in nutrition, health, and beauty, dsm-firmenich reinvents, manufactures, and combines vital nutrients, flavors, and fragrances for the world's growing population to thrive. With our comprehensive range of solutions, with natural and renewable ingredients and renowned science and technology capabilities, we work to create what is essential for life, desirable for consumers, and more sustainable for the planet.
dsm-firmenich is a Swiss-Dutch company, listed on Euronext Amsterdam, with operations in almost 60 countries and revenues of more than 12 billion. With a diverse, worldwide team of nearly 30,000 employees, we bring progress to life every day, everywhere, for billions of people.

Agency Statement
Please note this is a direct search led by dsm-firmenich. We only accept applications from candidates directly, not from agencies, nor subject to agency fees, percentages, or similar arrangements.
Posted 1 week ago
2.0 - 4.0 years
5 - 9 Lacs
Bengaluru
Work from Office
Cloud Developer
This role has been designed as Onsite with an expectation that you will primarily work from an HPE office.

Who We Are: Hewlett Packard Enterprise is the global edge-to-cloud company advancing the way people live and work. We help companies connect, protect, analyze, and act on their data and applications wherever they live, from edge to cloud, so they can turn insights into outcomes at the speed required to thrive in today's complex world. Our culture thrives on finding new and better ways to accelerate what's next. We know varied backgrounds are valued and succeed here. We have the flexibility to manage our work and personal needs. We make bold moves, together, and are a force for good. If you are looking to stretch and grow your career, our culture will embrace you. Open up opportunities with HPE.

Job Description: In HPE Hybrid Cloud, we lead the innovation agenda and technology roadmap for all of HPE. This includes managing the design, development, and product portfolio of our next-generation cloud platform, GreenLake. Working with customers, we help them reimagine their information technology needs to deliver a simple, consumable solution that helps them drive their business results. Join us to redefine what's next for you.

Job Family Definition: The Cloud Developer builds from the ground up to meet the needs of mission-critical applications, and is always looking for innovative approaches to deliver end-to-end technical solutions to solve customer problems. Brings technical thinking to break down complex data and to engineer new ideas and methods for solving, prototyping, designing, and implementing cloud-based solutions. Collaborates with project managers and development partners to ensure effective and efficient delivery, deployment, operation, monitoring, and support of Cloud engagements.
The Cloud Developer provides business value expertise to drive the development of innovative service offerings that enrich HPE's Cloud Services portfolio across multiple systems, platforms, and applications.

Management Level Definition: Contributes to assignments of limited scope by applying technical concepts and theoretical knowledge acquired through specialized training, education, or previous experience. Acts as a team member by providing information, analysis, and recommendations in support of team efforts. Exercises independent judgment within defined parameters.

Responsibilities:
- Develops and maintains cloud application modules per feature specifications, adhering to security policies.
- Designs test plans and executes and automates test cases for assigned portions of the application.
- Deploys code and debugs issues.
- Shares and reviews innovative technical ideas with peers, high-level technical contributors, technical writers, and managers.
- Analyzes science, engineering, business, and other data processing problems to develop and implement solutions to complex application problems, system administration issues, or network concerns.

What you need to bring: Bachelor's degree in computer science, engineering, information systems, or a closely related quantitative discipline; Master's desirable. Typically 2-4 years of experience.

Knowledge and Skills:
- Programming skills in Python, Java, Golang, or JavaScript.
- Understanding of basic testing, coding, and debugging procedures.
- Ability to quickly learn new skills and technologies and work well with other team members.
- Good written and verbal communication skills.
- Knowledge of Kubernetes.
- Understanding of any one cloud (AWS/GCP/Azure).
- Understanding of observability tools (Grafana, Prometheus, Humio, Alertmanager).
- Understanding of DevOps practices such as continuous integration/continuous deployment (CI/CD).
Additional Skills: Cloud Architectures, Cross-Domain Knowledge, Design Thinking, Development Fundamentals, DevOps, Distributed Computing, Microservices Fluency, Full Stack Development, Release Management, Security-First Mindset, User Experience (UX)

What We Can Offer You:
- Health & Wellbeing: We strive to provide our team members and their loved ones with a comprehensive suite of benefits that supports their physical, financial, and emotional wellbeing.
- Personal & Professional Development: We also invest in your career, because the better you are, the better we all are. We have specific programs catered to helping you reach any career goals you have, whether you want to become a knowledge expert in your field or apply your skills to another division.
- Unconditional Inclusion: We are unconditionally inclusive in the way we work and celebrate individual uniqueness. We know varied backgrounds are valued and succeed here. We have the flexibility to manage our work and personal needs. We make bold moves, together, and are a force for good.

Let's Stay Connected: Follow @HPECareers on Instagram to see the latest on people, culture, and tech at HPE. #india #hybridcloud
Job: Engineering
Job Level: TCP_01

HPE is an Equal Employment Opportunity / Veterans / Disabled / LGBT employer. We do not discriminate on the basis of race, gender, or any other protected category, and all decisions we make are made on the basis of qualifications, merit, and business need. Our goal is to be one global team that is representative of our customers, in an inclusive environment where we can continue to innovate and grow together. Please click here: Equal Employment Opportunity. Hewlett Packard Enterprise is EEO Protected Veteran / Individual with Disabilities. HPE will comply with all applicable laws related to employer use of arrest and conviction records, including laws requiring employers to consider for employment qualified applicants with criminal histories.
Posted 1 week ago
3.0 - 5.0 years
3 - 7 Lacs
Bengaluru
Work from Office
You'll be our: Lab Shift Incharge (PRD Testing)
You'll be based at: Ather Energy Ltd (Proto Lab)
You'll be aligned with: PRD Engineer, Functional Attributes Testing
You'll be joining our: Product Validation team

What you'll do at Ather: As a PRD Shift Incharge, you will be responsible for:
- Handling test lab shift operations and ensuring the smooth running of each shift.
- Managing the lab technicians to ensure that the necessary results are achieved.
- Taking charge of lab rigs such as chassis dynos, transmission dynos, and climatic chambers, including chamber activity and maintenance.
- Installation and commissioning of test rigs such as chassis dynos, climatic chambers, and transmission rigs.
- Execution of test cases for PRD attributes.
- All service and maintenance of equipment and rigs.
- Working closely with PRD engineers on attribute testing and ensuring its accuracy.
- Raising bugs and immediately highlighting critical failures in the system/vehicle.
- Ensuring proper usage of equipment, instruments, and machinery.
- Adhering to test standards/procedures to suit the testing needs.
- Creating quality test reports with relevant information.

Here's what we're looking for:
- Proven experience in dyno operations and controls, especially from the automotive industry.
- Strong understanding of electric vehicle performance.
- Proven experience in instrumentation, data acquisition, and data processing.
- Basic data analysis skills.
- Good communication skills.
- Excellent data visualization skills.
- Willingness to work in 3 shifts.

What you bring to Ather: Diploma or Bachelor's in Mechanical or Electrical Engineering or any other equivalent field. 3 to 5 years of automotive lab testing work experience.
Posted 1 week ago
2.0 - 4.0 years
22 - 27 Lacs
Bengaluru
Work from Office
Cloud Developer
This role has been designed as Onsite with an expectation that you will primarily work from an HPE office.

Who We Are: Hewlett Packard Enterprise is the global edge-to-cloud company advancing the way people live and work. We help companies connect, protect, analyze, and act on their data and applications wherever they live, from edge to cloud, so they can turn insights into outcomes at the speed required to thrive in today's complex world. Our culture thrives on finding new and better ways to accelerate what's next. We know varied backgrounds are valued and succeed here. We have the flexibility to manage our work and personal needs. We make bold moves, together, and are a force for good. If you are looking to stretch and grow your career, our culture will embrace you. Open up opportunities with HPE.

Job Description: In HPE Hybrid Cloud, we lead the innovation agenda and technology roadmap for all of HPE. This includes managing the design, development, and product portfolio of our next-generation cloud platform, GreenLake. Working with customers, we help them reimagine their information technology needs to deliver a simple, consumable solution that helps them drive their business results. Join us to redefine what's next for you.

Job Family Definition: The Cloud Developer builds from the ground up to meet the needs of mission-critical applications, and is always looking for innovative approaches to deliver end-to-end technical solutions to solve customer problems. Brings technical thinking to break down complex data and to engineer new ideas and methods for solving, prototyping, designing, and implementing cloud-based solutions. Collaborates with project managers and development partners to ensure effective and efficient delivery, deployment, operation, monitoring, and support of Cloud engagements.
The Cloud Developer provides business value expertise to drive the development of innovative service offerings that enrich HPE's Cloud Services portfolio across multiple systems, platforms, and applications.

Management Level Definition: Contributes to assignments of limited scope by applying technical concepts and theoretical knowledge acquired through specialized training, education, or previous experience. Acts as a team member by providing information, analysis, and recommendations in support of team efforts. Exercises independent judgment within defined parameters.

Responsibilities:
- Develops and maintains cloud application modules per feature specifications, adhering to security policies.
- Designs test plans and executes and automates test cases for assigned portions of the application.
- Deploys code and debugs issues.
- Shares and reviews innovative technical ideas with peers, high-level technical contributors, technical writers, and managers.
- Analyzes science, engineering, business, and other data processing problems to develop and implement solutions to complex application problems, system administration issues, or network concerns.

What you need to bring: Bachelor's degree in computer science, engineering, information systems, or a closely related quantitative discipline; Master's desirable. Typically 2-4 years of experience.

Knowledge and Skills:
- Programming skills in Python, Java, Golang, or JavaScript.
- Understanding of basic testing, coding, and debugging procedures.
- Ability to quickly learn new skills and technologies and work well with other team members.
- Good written and verbal communication skills.
- Knowledge of Kubernetes.
- Understanding of any one cloud (AWS/GCP/Azure).
- Understanding of observability tools (Grafana, Prometheus, Humio, Alertmanager).
- Understanding of DevOps practices such as continuous integration/continuous deployment (CI/CD).
Additional Skills: Cloud Architectures, Cross Domain Knowledge, Design Thinking, Development Fundamentals, DevOps, Distributed Computing, Microservices Fluency, Full Stack Development, Release Management, Security-First Mindset, User Experience (UX) What We Can Offer You: Health & Wellbeing: We strive to provide our team members and their loved ones with a comprehensive suite of benefits that supports their physical, financial, and emotional wellbeing. Personal & Professional Development: We also invest in your career, because the better you are, the better we all are. We have specific programs catered to helping you reach any career goals you have, whether you want to become a knowledge expert in your field or apply your skills to another division. Unconditional Inclusion: We are unconditionally inclusive in the way we work and celebrate individual uniqueness. We know varied backgrounds are valued and succeed here. We have the flexibility to manage our work and personal needs. We make bold moves, together, and are a force for good. Let's Stay Connected: Follow @HPECareers on Instagram to see the latest on people, culture, and tech at HPE. #india #hybridcloud Job: Engineering Job Level: TCP_01 HPE is an Equal Employment Opportunity / Veterans / Disabled / LGBT employer. We do not discriminate on the basis of race, gender, or any other protected category, and all decisions we make are made on the basis of qualifications, merit, and business need. Our goal is to be one global team that is representative of our customers, in an inclusive environment where we can continue to innovate and grow together. Please click here: Equal Employment Opportunity. Hewlett Packard Enterprise is EEO Protected Veteran / Individual with Disabilities. HPE will comply with all applicable laws related to employer use of arrest and conviction records, including laws requiring employers to consider for employment qualified applicants with criminal histories.
Posted 1 week ago
4.0 - 7.0 years
6 - 7 Lacs
Bengaluru
Work from Office
Embedded Network Engineer This role has been designed as Onsite with an expectation that you will primarily work from an HPE office. Who We Are: Hewlett Packard Enterprise is the global edge-to-cloud company advancing the way people live and work. We help companies connect, protect, analyze, and act on their data and applications wherever they live, from edge to cloud, so they can turn insights into outcomes at the speed required to thrive in today's complex world. Our culture thrives on finding new and better ways to accelerate what's next. We know varied backgrounds are valued and succeed here. We have the flexibility to manage our work and personal needs. We make bold moves, together, and are a force for good. If you are looking to stretch and grow your career, our culture will embrace you. Open up opportunities with HPE. Aruba is an HPE company and a leading provider of next-generation network access solutions for the mobile enterprise. Helping some of the largest companies in the world modernize their networks to meet the demands of a digital future, Aruba is redefining the Intelligent Edge and creating new customer experiences across intelligent spaces and digital workspaces. Join us and redefine what's next for you. Management Level Definition: Contributions include applying developed subject matter expertise to solve common and sometimes complex technical problems and recommending alternatives where necessary. Might act as project lead and provide assistance to lower-level professionals. Exercises independent judgment and consults with others to determine the best method for accomplishing work and achieving objectives. What you'll do: Analyzes feature specifications and determines the required coding, testing, and integration activities. Designs and develops moderate to complex cloud application modules per feature specifications, adhering to security policies. 
Identifies, debugs, and creates solutions for issues with code and its integration into the application architecture. Develops and executes comprehensive test plans for features, adhering to performance, scale, usability, and security requirements. Deploys cloud-based systems and application code using continuous integration/deployment (CI/CD) pipelines to automate cloud application management, scaling, and deployment. Contributes towards innovation and the integration of new technologies into projects. Analyzes science, engineering, business, and other data processing problems to develop and implement solutions to complex application problems, system administration issues, or network concerns. What you need to bring: Education and Experience Required: Bachelor's degree in computer science, engineering, information systems, or a closely related quantitative discipline. Master's desirable. Typically 4-7 years of experience. Knowledge and Skills: Strong programming skills in C. In-depth understanding of L2/L3 protocols, routing protocols, and multicast protocols, with hands-on routing and switching experience. Good understanding of distributed systems, event-driven programming paradigms, and designing for scale and performance. Experience with cloud-native applications, developer tools, managed services, and next-generation databases. Knowledge of DevOps practices like CI/CD, infrastructure as code, containerization, and orchestration using Kubernetes. 
Good written and verbal communication skills; agile in a changing environment. Additional Skills: Cloud Architectures, Cross Domain Knowledge, Design Thinking, Development Fundamentals, DevOps, Distributed Computing, Microservices Fluency, Full Stack Development, Release Management, Security-First Mindset, User Experience (UX) What We Can Offer You: Health & Wellbeing: We strive to provide our team members and their loved ones with a comprehensive suite of benefits that supports their physical, financial, and emotional wellbeing. Personal & Professional Development: We also invest in your career, because the better you are, the better we all are. We have specific programs catered to helping you reach any career goals you have, whether you want to become a knowledge expert in your field or apply your skills to another division. Unconditional Inclusion: We are unconditionally inclusive in the way we work and celebrate individual uniqueness. We know varied backgrounds are valued and succeed here. We have the flexibility to manage our work and personal needs. We make bold moves, together, and are a force for good. Let's Stay Connected: Follow @HPECareers on Instagram to see the latest on people, culture, and tech at HPE. #india #aruba Job: Engineering Job Level: TCP_03 HPE is an Equal Employment Opportunity / Veterans / Disabled / LGBT employer. We do not discriminate on the basis of race, gender, or any other protected category, and all decisions we make are made on the basis of qualifications, merit, and business need. Our goal is to be one global team that is representative of our customers, in an inclusive environment where we can continue to innovate and grow together. Please click here: Equal Employment Opportunity. Hewlett Packard Enterprise is EEO Protected Veteran / Individual with Disabilities. 
HPE will comply with all applicable laws related to employer use of arrest and conviction records, including laws requiring employers to consider for employment qualified applicants with criminal histories.
Posted 1 week ago
7.0 - 12.0 years
9 - 13 Lacs
Bengaluru
Work from Office
Automation NoSQL Data Engineer This role has been designed as Onsite with an expectation that you will primarily work from an HPE partner/customer office. Who We Are: Hewlett Packard Enterprise is the global edge-to-cloud company advancing the way people live and work. We help companies connect, protect, analyze, and act on their data and applications wherever they live, from edge to cloud, so they can turn insights into outcomes at the speed required to thrive in today's complex world. Our culture thrives on finding new and better ways to accelerate what's next. We know varied backgrounds are valued and succeed here. We have the flexibility to manage our work and personal needs. We make bold moves, together, and are a force for good. If you are looking to stretch and grow your career, our culture will embrace you. Open up opportunities with HPE. HPE Operations is our innovative IT services organization. It provides the expertise to advise, integrate, and accelerate our customers' outcomes from their digital transformation. Our teams collaborate to transform insight into innovation. In today's fast-paced, hybrid IT world, being at business speed means overcoming IT complexity to match the speed of actions to the speed of opportunities. Deploy the right technology to respond quickly to market possibilities. Join us and redefine what's next for you. What you will do: Think through complex data engineering problems in a fast-paced environment and drive solutions to reality. Work in a dynamic, collaborative environment to build DevOps-centered data solutions using the latest technologies and tools. Provide engineering-level support for data tools and systems deployed in customer environments. Respond quickly and professionally to customer emails/requests for assistance. What you need to bring: Bachelor's degree in Computer Science, Information Systems, or equivalent. 
7+ years of demonstrated experience working in software development teams with a strong focus on NoSQL databases and distributed data systems. Strong experience in the automated deployment, troubleshooting, and fine-tuning of technologies such as Apache Cassandra, ClickHouse, MongoDB, Apache Spark, Apache Flink, and Apache Airflow. Technical Skills: Strong knowledge of NoSQL databases such as Apache Cassandra, ClickHouse, and MongoDB, including their installation, configuration, and performance tuning in production environments. Expertise in deploying and managing real-time data processing pipelines using Apache Spark, Apache Flink, and Apache Airflow. Experience in deploying and managing Apache Spark and Apache Flink operators on Kubernetes and other containerized environments, ensuring high availability and scalability of data processing jobs. Hands-on experience in configuring and optimizing Apache Spark and Apache Flink clusters, including fine-tuning resource allocation, fault tolerance, and job execution. Proficiency in authoring, automating, and optimizing Apache Airflow DAGs to orchestrate complex data workflows across Spark and Flink jobs, handling retries, task dependencies, and scheduling. Solid experience in troubleshooting and optimizing performance in distributed data systems. Expertise in automated deployment and infrastructure management using tools such as Terraform, Chef, Ansible, Kubernetes, or similar technologies. Experience with CI/CD pipelines using tools like Jenkins, GitLab CI, Bamboo, or similar. Strong knowledge of scripting languages such as Python, Bash, or Go for automation, provisioning Platform-as-a-Service, and workflow orchestration. 
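The DAG-authoring skills above (task dependencies, retries, scheduling) come down to a pattern that can be sketched without Airflow itself. A minimal illustration with hypothetical task names; in a real pipeline these would be declared as Airflow operators inside a DAG rather than hand-rolled:

```python
# Minimal sketch of DAG-style orchestration: the pattern Airflow automates.
# Task names and retry counts are illustrative, not from any real pipeline.
# Assumes the task graph is acyclic.
from collections import deque

class Task:
    def __init__(self, name, fn, retries=0):
        self.name, self.fn, self.retries = name, fn, retries
        self.upstream = []

    def set_upstream(self, other):
        self.upstream.append(other)

def run_dag(tasks):
    """Execute tasks in dependency order, retrying failures up to task.retries."""
    done, order = set(), []
    pending = deque(tasks)
    while pending:
        task = pending.popleft()
        if any(up.name not in done for up in task.upstream):
            pending.append(task)  # dependencies not finished yet; defer
            continue
        for attempt in range(task.retries + 1):
            try:
                task.fn()
                break
            except Exception:
                if attempt == task.retries:
                    raise  # retries exhausted; fail the run
        done.add(task.name)
        order.append(task.name)
    return order

# Hypothetical ETL steps wired as extract -> transform -> load.
log = []
extract = Task("extract", lambda: log.append("extract"))
transform = Task("transform", lambda: log.append("transform"), retries=2)
load = Task("load", lambda: log.append("load"))
transform.set_upstream(extract)
load.set_upstream(transform)
```

Calling `run_dag([load, transform, extract])` runs the steps in dependency order regardless of how the list is given, which is exactly the guarantee an Airflow scheduler provides at much larger scale.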
Additional Skills: Accountability, Active Learning, Active Listening, Bias, Business Growth, Client Expectations Management, Coaching, Creativity, Critical Thinking, Cross-Functional Teamwork, Customer Centric Solutions, Customer Relationship Management (CRM), Design Thinking, Empathy, Follow-Through, Growth Mindset, Information Technology (IT) Infrastructure, Infrastructure as a Service (IaaS), Intellectual Curiosity, Long Term Planning, Managing Ambiguity, Process Improvements, Product Services, Relationship Building {+ 5 more} What We Can Offer You: Health & Wellbeing: We strive to provide our team members and their loved ones with a comprehensive suite of benefits that supports their physical, financial, and emotional wellbeing. Personal & Professional Development: We also invest in your career, because the better you are, the better we all are. We have specific programs catered to helping you reach any career goals you have, whether you want to become a knowledge expert in your field or apply your skills to another division. Unconditional Inclusion: We are unconditionally inclusive in the way we work and celebrate individual uniqueness. We know varied backgrounds are valued and succeed here. We have the flexibility to manage our work and personal needs. We make bold moves, together, and are a force for good. Let's Stay Connected: Follow @HPECareers on Instagram to see the latest on people, culture, and tech at HPE. #india #operations Job: Services Job Level: TCP_03 HPE is an Equal Employment Opportunity / Veterans / Disabled / LGBT employer. We do not discriminate on the basis of race, gender, or any other protected category, and all decisions we make are made on the basis of qualifications, merit, and business need. Our goal is to be one global team that is representative of our customers, in an inclusive environment where we can continue to innovate and grow together. Please click here: Equal Employment Opportunity. 
Hewlett Packard Enterprise is EEO Protected Veteran/ Individual with Disabilities. HPE will comply with all applicable laws related to employer use of arrest and conviction records, including laws requiring employers to consider for employment qualified applicants with criminal histories.
Posted 1 week ago
5.0 - 10.0 years
10 - 14 Lacs
Bengaluru
Work from Office
Cloud QA Automation Testing This role has been designed as Onsite with an expectation that you will primarily work from an HPE office. Who We Are: Hewlett Packard Enterprise is the global edge-to-cloud company advancing the way people live and work. We help companies connect, protect, analyze, and act on their data and applications wherever they live, from edge to cloud, so they can turn insights into outcomes at the speed required to thrive in today's complex world. Our culture thrives on finding new and better ways to accelerate what's next. We know varied backgrounds are valued and succeed here. We have the flexibility to manage our work and personal needs. We make bold moves, together, and are a force for good. If you are looking to stretch and grow your career, our culture will embrace you. Open up opportunities with HPE. Aruba is an HPE company and a leading provider of next-generation network access solutions for the mobile enterprise. Helping some of the largest companies in the world modernize their networks to meet the demands of a digital future, Aruba is redefining the Intelligent Edge and creating new customer experiences across intelligent spaces and digital workspaces. Join us and redefine what's next for you. What you'll do: Automates testing of moderate cloud application features per specifications. Automates testing of cloud application modules, adhering to security policies. Designs test plans and develops, executes, and automates test cases for assigned portions of the developed code. Deploys code and troubleshoots issues in application modules and the deployment environment. Shares and reviews innovative technical ideas with peers, high-level technical contributors, technical writers, and managers. Analyzes science, engineering, business, and other data processing problems to develop and implement solutions to complex application problems, system administration issues, or network concerns. 
What you need to bring: Education and Experience Required: Bachelor's degree in computer science, engineering, information systems, or a closely related quantitative discipline. Master's desirable. Typically 5-10 years of experience. Knowledge and Skills: Mandatory Skills: Python with Selenium, Cypress, or Java-based test automation. Domain: Cloud, Networking. Good written and verbal communication skills. Ability to quickly learn new skills and technologies and work well with other team members. Understanding of DevOps practices like continuous integration/deployment and orchestration with Kubernetes. Additional Skills: Cloud Architectures, Cross Domain Knowledge, Design Thinking, Development Fundamentals, DevOps, Distributed Computing, Microservices Fluency, Full Stack Development, Release Management, Security-First Mindset, User Experience (UX) What We Can Offer You: Health & Wellbeing: We strive to provide our team members and their loved ones with a comprehensive suite of benefits that supports their physical, financial, and emotional wellbeing. Personal & Professional Development: We also invest in your career, because the better you are, the better we all are. We have specific programs catered to helping you reach any career goals you have, whether you want to become a knowledge expert in your field or apply your skills to another division. Unconditional Inclusion: We are unconditionally inclusive in the way we work and celebrate individual uniqueness. We know varied backgrounds are valued and succeed here. We have the flexibility to manage our work and personal needs. We make bold moves, together, and are a force for good. Let's Stay Connected: Follow @HPECareers on Instagram to see the latest on people, culture, and tech at HPE. #india #aruba Job: Engineering Job Level: TCP_02 HPE is an Equal Employment Opportunity / Veterans / Disabled / LGBT employer. 
We do not discriminate on the basis of race, gender, or any other protected category, and all decisions we make are made on the basis of qualifications, merit, and business need. Our goal is to be one global team that is representative of our customers, in an inclusive environment where we can continue to innovate and grow together. Please click here: Equal Employment Opportunity . Hewlett Packard Enterprise is EEO Protected Veteran/ Individual with Disabilities. HPE will comply with all applicable laws related to employer use of arrest and conviction records, including laws requiring employers to consider for employment qualified applicants with criminal histories.
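The Python-plus-Selenium automation this role asks for commonly follows the page-object pattern, which keeps locators and actions out of test bodies. A minimal sketch in which a stub stands in for Selenium's WebDriver so it runs without a browser; with real Selenium you would pass a `webdriver.Chrome()` instance instead, and every class and field name here is illustrative:

```python
# Page-object pattern sketch for UI test automation. StubDriver pretends to
# be a WebDriver over a canned page so the example is self-contained.

class StubDriver:
    """Stand-in for selenium.webdriver: holds form fields, records a submit."""
    def __init__(self, fields):
        self.fields = dict(fields)
        self.submitted = None

    def submit(self):
        self.submitted = dict(self.fields)  # snapshot what was sent

class LoginPage:
    """Encapsulates page interactions so tests stay short and readable."""
    def __init__(self, driver):
        self.driver = driver

    def login(self, user, password):
        # With Selenium this would be driver.find_element(...).send_keys(...)
        self.driver.fields["username"] = user
        self.driver.fields["password"] = password
        self.driver.submit()
        return self.driver.submitted["username"] == user

driver = StubDriver({"username": "", "password": ""})
page = LoginPage(driver)
ok = page.login("qa-user", "s3cret")
```

The payoff of the pattern is that when a locator changes, only `LoginPage` is touched; the test cases themselves stay stable.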
Posted 1 week ago
5.0 - 10.0 years
13 - 18 Lacs
Vadodara
Work from Office
Internal Job Title: Data Analytics Development Lead Business: Lucy Electric Location: Halol, Vadodara, Gujarat, #LI-HYBRID Job Reference Number: 4076 Job Purpose Primary point of contact for data engineering, analysis, reporting, and management information from ERP systems and other sources. Maintain and enhance KPIs, metrics, and dashboards delivering actionable insights into business operations to drive continuous improvement. Support multiple business units by enabling comparisons and identifying opportunities for process enhancement. Engage a wide range of stakeholders to lead activities using Microsoft Power Platform, with a focus on Power BI, to ensure business requirements are met. Contribute to the functional roadmap to align data, reporting, AI and analytics capabilities in the short, medium, and long term. Job Context Working closely with the Data Analytics Solutions Architect and cross-functional teams to ensure a coordinated approach to Business Intelligence delivery in alignment with business priorities and goals Act as the Data Platform Subject Matter Expert to support the team in advancing processes for agile development, metadata definition, business logic coding, data modelling, unit testing and data product delivery in line with the functional roadmap Job Dimensions The role is a hybrid role, with flexible attendance at our office in Vadodara, India, to support business engagement There is an occasional need to visit other sites and business partners at their premises to build stakeholder relationships or to attend specific industry events, globally Key Accountabilities These will include: Analyzing complex data sets to uncover trends, patterns, and actionable insights that drive business effectiveness and operational efficiency Collaborating remotely with cross-functional stakeholders across different countries, to confirm business requirements and translate them into analytical solutions Overseeing the end-to-end data lifecycle, including data 
collection, cleaning, validation, and warehousing, ensuring high data quality and integrity. Carrying out agile backlog management (CI/CD) and coordinating design reviews against best-practice guidelines, with change control and user acceptance testing (UAT). Collaborating with the wider business to promote appropriate use of data analytics tools through co-ordinated communications. Delivering training and coaching sessions to enhance data literacy and empower business users to make data-driven decisions. Leading activities according to the analytics roadmap, resolving issues, identifying opportunities, and defining clear success metrics. Supporting the Solutions Architects to foster a strong data culture and ensuring analytics input is embedded in the evaluation and prioritisation of new initiatives. Troubleshooting production issues and coordinating with others to resolve incidents and complete tasks using IT Service Management (ITSM) tools. Qualifications, Experience & Skills: A bachelor's degree (or equivalent professional qualifications and experience) in a relevant stream. Effective communication skills in the global business language, English. 5+ years of experience in a business analytics or data-driven role using BI tools, preferably Power BI/Fabric, with at least 2 years in a leadership capacity demonstrating strong team management skills. Capability to deconstruct existing reports, validate data, and guide a small team to design and implement BI solutions. Good understanding of handling multiple data sources, such as MS SQL, Dataverse, M365, and Azure data services. Familiarity with Microsoft Dynamics 365 applications or equivalent enterprise-level finance, supply chain operations, customer service, and sales business software. A keen investigative mindset for identifying process improvement opportunities through data analysis, providing recommendations for automation and optimisation. Experience in creating well-formed supporting documentation. A proactive approach to meet 
service levels for Business as Usual (BAU) support and ad-hoc reporting needs, while working on projects and agile workstreams at the same time. A general understanding of a company's value chain and basic manufacturing industry terminology. Good-to-Have Skills: ETL/ELT toolsets; Data Lake / OneLake; DAX, Python, T-SQL, C#, REST APIs; Azure DevOps with multistage pipelines, source/version control, Git; Microsoft Power Platform and Fabric administration; Dynamics 365 accreditation or a similar ERP functional qualification; data governance tools and principles; general AI understanding, Microsoft Copilot, Machine Learning (ML) frameworks, near-time and real-time data processing with large datasets. Behavioral Competencies: Capable people and performance manager, with excellent communication and interpersonal skills. Process change adopter, through positive stakeholder relationship management with internal and external parties. Customer-oriented problem solver, with a desire to share knowledge and support others, demonstrating active listening and empathy towards their views and concerns. Business-focused innovative thinker, able to adapt and achieve collaborative outcomes in a global culture, working with remote support teams. Lucy Group Ltd is the parent company of all Lucy Group companies. Since its origins in Oxford, UK, over 200 years ago, the Group has grown and diversified. The Group's businesses help to advance the transition to a carbon-free world with infrastructure that enables renewable energy, electric vehicles, smart city management, and sustainable living. Today we employ in excess of 1,600 people worldwide, with operations in the UK, Saudi Arabia, UAE, India, South Africa, Brazil, Thailand, Malaysia, and East Africa. Lucy Electric is an international leader in intelligent secondary power distribution products and solutions, with remote operation and monitoring. 
Linking energy generation to consumption, the business specialises in high-performance medium- and low-voltage switchgear for utility, industrial, and commercial applications. Key products include Ring Main Units and package substations. Does this sound interesting? We would love to hear from you. Our application process is quick and easy. Apply today!
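The data cleaning and validation work this role describes can be sketched as rule-driven row checks run before data is loaded to a warehouse. A minimal illustration using only the standard library; the column names and rules are hypothetical:

```python
# Row-level data-quality checks of the kind a BI pipeline runs before load.
# Column names and validation rules are hypothetical.
import csv
import io

RULES = {
    "order_id": lambda v: v.strip() != "",          # must be present
    "qty": lambda v: v.isdigit() and int(v) > 0,    # positive integer
    "country": lambda v: len(v) == 2,               # ISO-3166 alpha-2 style
}

def validate(csv_text):
    """Return (clean_rows, errors); errors are (line_no, column) pairs."""
    clean, errors = [], []
    # Data starts on physical line 2 (line 1 is the header).
    for line_no, row in enumerate(csv.DictReader(io.StringIO(csv_text)), start=2):
        bad = [col for col, rule in RULES.items() if not rule(row[col])]
        if bad:
            errors.extend((line_no, col) for col in bad)
        else:
            clean.append(row)
    return clean, errors

sample = "order_id,qty,country\nA100,3,GB\n,2,IN\nA101,0,IND\n"
clean, errors = validate(sample)
```

Keeping the rules in a single table makes them easy to review with business stakeholders and to extend as new sources are onboarded; failed rows are reported by line and column rather than silently dropped.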
Posted 1 week ago
2.0 - 7.0 years
8 - 12 Lacs
Pune, Gurugram
Work from Office
We are seeking a highly skilled Development Lead with expertise in Generative AI and Large Language Models in particular to join our dynamic team. As a Development Lead, you will play a key role in developing cutting-edge LLM applications and systems for our clients. Your primary focus will be on driving innovation and leveraging LLMs to create impactful solutions. The ideal candidate will have a strong technical background and a passion for pushing the boundaries of LLM apps. Job Description: Responsibilities: Develop and extend digital products and creative applications, leveraging LLM technologies at their core. Lead a product development and product operations team to further develop, enhance, and extend existing digital products built on top of LLMs. Lead client onboarding, rollout, and adoption efforts, maximizing use of the product across multiple clients. Lead enhancements and extensions for client-specific capabilities and requests. Successfully lead and deliver projects involving Cloud Gen-AI platforms and Cloud AI services, data pre-processing, Cloud AI PaaS solutions, and LLMs. Work with base foundation LLM models and fine-tuned LLM models, across a variety of different LLMs and LLM APIs. Conceptualize, design, build, and develop experiences and solutions that demonstrate the minimum required functionality within tight timelines. Collaborate with creative technology leaders and cross-functional teams to test the feasibility of new ideas, help refine and validate client requirements, and translate them into working prototypes, and from there into scalable Gen-AI solutions. Research and explore emerging trends and techniques in the field of generative AI and LLMs to stay at the forefront of innovation. 
Research and explore new products, platforms, and frameworks in the field of generative AI on an ongoing basis, and stay on top of this very dynamic, evolving field. Design and optimize Gen-AI apps for efficient data processing and model leverage. Implement LLMOps processes, managing Gen-AI apps and models across the lifecycle from prompt management to results evaluation. Evaluate and fine-tune models to ensure high performance and accuracy. Collaborate with engineers to develop and integrate AI solutions into existing systems. Stay up to date with the latest advancements in the field of Gen-AI and contribute to the company's technical knowledge base. Must-Have: Strong expertise in Python development and the Python dev ecosystem, including various frameworks/libraries for front-end and back-end Python development, data processing, API integration, and AI/ML solution development. Minimum 2 years of hands-on experience working with Large Language Models. Hands-on experience building production solutions with a variety of different LLMs and models, including the Azure OpenAI GPT model family primarily, but also Google Gemini, Anthropic Claude, etc. Deep experience and expertise in Cloud Gen-AI platforms, services, and APIs, primarily Azure OpenAI. Solid hands-on experience working with RAG pipelines and enterprise technologies, solutions, and frameworks, including LangChain, LlamaIndex, etc. Solid hands-on experience developing end-to-end RAG pipelines. Solid hands-on experience with AI and LLM workflows. Experience with LLM model registries (Hugging Face), LLM APIs, embedding models, etc. Experience with vector databases (Azure AI Search, AWS Kendra, FAISS, Milvus, etc.). Experience with LLM evaluation frameworks such as Ragas, and their use to evaluate and improve LLM model outputs. Experience in data preprocessing and post-processing model/results evaluation. 
Hands-on experience with API integration and orchestration across multiple platforms. Good experience with workflow builders and low-code workflow builder tools such as Azure Logic Apps or n8n (Nodemation). Good experience with serverless cloud applications, including cloud/serverless functions with Azure. Good experience with automation workflows and building automation solutions to facilitate rapid onboarding for digital products. Ability to lead design and development teams for full-stack Gen-AI apps and products/solutions built on LLMs and diffusion models. Ability to lead design and development for creative experiences and campaigns built on LLMs and diffusion models. Nice-to-Have Skills (not essential, but useful): Good understanding of transformer models and how they work. Hands-on experience with fine-tuning LLM models at scale. Good experience with agent-driven Gen-AI architectures and solutions, and working with AI agents. Some experience with single-agent and multi-agent orchestration solutions. Hands-on experience with diffusion models and AI art models, including SDXL, DALL-E 3, Adobe Firefly, and Midjourney, is highly desirable. Hands-on experience with image processing and creative automation at scale using AI models. Hands-on experience with image and media transformation and adaptation at scale, using AI art and diffusion models. Hands-on experience with dynamic creative use cases using AI art and diffusion models. Hands-on experience with fine-tuning diffusion models, and fine-tuning techniques such as LoRA for AI art models as well. Hands-on experience with AI speech models and services, including text-to-speech and speech-to-text. Good background and foundation in machine learning solutions and algorithms. Experience designing, developing, and deploying production-grade machine learning solutions. Experience with Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs). 
Experience with custom ML model development and deployment. Proficiency in deep learning frameworks such as TensorFlow, PyTorch, or Keras. Strong knowledge of machine learning algorithms and their practical applications. Experience with cloud ML platforms such as Azure ML Service, AWS SageMaker, and NVIDIA AI Foundry. Hands-on experience with video generation models. Hands-on experience with 3D generation models. Location: DGS India - Pune - Kharadi EON Free Zone Brand: Dentsu Creative Time Type: Full time Contract Type: Consultant
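The RAG pipelines listed among the must-haves follow a retrieve-then-generate pattern. A minimal sketch of the retrieval half, in which a bag-of-words vector stands in for a real embedding model and vector database (in production, frameworks such as LangChain or LlamaIndex would supply those, and the assembled prompt would go to an LLM):

```python
# Retrieval step of a RAG pipeline, with a toy term-frequency "embedding"
# standing in for an embedding model + vector store. Documents are made up.
import math
from collections import Counter

def embed(text):
    """Toy embedding: term-frequency vector over lowercase tokens."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=2):
    """Rank chunks by similarity to the query; the top-k become LLM context."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "invoices are archived after ninety days",
    "the vpn requires a hardware token",
    "archived invoices can be restored from cold storage",
]
context = retrieve("how do I restore an archived invoice", docs, k=2)
# The retrieved context is then stuffed into the generation prompt:
prompt = "Answer using only this context:\n" + "\n".join(context)
```

Grounding the prompt in retrieved chunks, rather than relying on the model's parametric memory, is what lets the generation step cite current, client-specific data.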
Posted 1 week ago
3.0 - 5.0 years
6 - 9 Lacs
Bengaluru
Work from Office
About GalaxEye Space GalaxEye is pioneering the next generation of Earth Observation (EO) by leveraging advanced SAR and MSI sensors to provide high-resolution satellite imagery and actionable insights. Our mission is to revolutionize geospatial intelligence for industries like agriculture, defense, and disaster management. Role Overview We are hiring a Python Cloud Developer with a SaaS background to develop cloud-native applications for our satellite analytics platform. You will be responsible for building scalable backend services, optimizing cloud deployments, and ensuring seamless integration with AI-driven geospatial models. Key Responsibilities Develop cloud-native Python applications for satellite data processing. Design and deploy serverless architectures on AWS, GCP, or Azure. Optimize and scale backend services handling large geospatial datasets. Integrate APIs and microservices with front-end applications. Implement security best practices for cloud-based SaaS solutions. Work with data scientists to integrate AI/ML models into production. Automate deployment and monitoring using CI/CD pipelines. Requirements 3-5 years of experience in Python development on cloud. Strong understanding of SaaS architectures and cloud-native development. Hands-on experience with FastAPI, Flask, or Django. Experience working with AWS Lambda, API Gateway, and DynamoDB. Knowledge of Kubernetes, Docker, and Terraform. Experience handling high-throughput data processing. Strong problem-solving and debugging skills. Benefits Fair compensation will be provided as per market standards Experience rapid growth and start-up culture Flexible Working Hours Open to explore, discuss and implement new ideas and processes Opportunity to work closely with the Founding Team at GalaxEye Get a chance to work with Advisors holding senior positions and decades of experience
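The high-throughput processing this role calls for usually comes down to streaming input in fixed-size chunks so memory stays flat regardless of dataset size. A minimal generator-based sketch; the "pixel" values and chunk size are hypothetical:

```python
# Generator-based streaming: process a large dataset in fixed-size chunks
# with constant memory. Values and chunk size are illustrative.
from itertools import islice

def chunked(iterable, size):
    """Yield lists of up to `size` items from any iterable, lazily."""
    it = iter(iterable)
    while chunk := list(islice(it, size)):
        yield chunk

def normalize(chunk, scale=255.0):
    """Per-chunk transform, e.g. scaling raw sensor values into [0, 1]."""
    return [v / scale for v in chunk]

# Simulate a large sensor stream without materializing it in memory.
raw = (i % 256 for i in range(1_000_000))
peak = 0.0
for chunk in chunked(raw, 4096):
    peak = max(peak, max(normalize(chunk)))
```

The same shape scales from a local file to an object-store stream: only `raw` changes, while the per-chunk transform and the aggregate stay identical.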
2.0 - 7.0 years
6 - 10 Lacs
Pune, Gurugram
Work from Office
We are seeking a highly skilled Development Lead with expertise in Generative AI and, in particular, Large Language Models to join our dynamic team. As a Development Lead, you will play a key role in developing cutting-edge LLM applications and systems for our clients. Your primary focus will be on driving innovation and leveraging LLMs to create impactful solutions. The ideal candidate will have a strong technical background and a passion for pushing the boundaries of LLM applications.

Job Description:

Responsibilities:
- Develop and extend digital products and creative applications with LLM technologies at their core.
- Lead a product development and product operations team to further develop, enhance, and extend existing digital products built on top of LLMs.
- Lead client onboarding, rollout, and adoption efforts, maximizing use of the product across multiple clients.
- Lead enhancements and extensions for client-specific capabilities and requests.
- Successfully lead and deliver projects involving cloud Gen-AI platforms and cloud AI services, data pre-processing, cloud AI PaaS solutions, and LLMs.
- Work with base foundation LLMs and fine-tuned LLMs, across a variety of different models and LLM APIs.
- Conceptualize, design, and build experiences and solutions that demonstrate the minimum required functionality within tight timelines.
- Collaborate with creative technology leaders and cross-functional teams to test the feasibility of new ideas, refine and validate client requirements, translate them into working prototypes, and from there into scalable Gen-AI solutions.
- Research and explore emerging trends and techniques in generative AI and LLMs to stay at the forefront of innovation.
- Research and explore new products, platforms, and frameworks in generative AI on an ongoing basis, staying on top of this dynamic, evolving field.
- Design and optimize Gen-AI apps for efficient data processing and model leverage.
- Implement LLMOps processes, and manage Gen-AI apps and models across the lifecycle from prompt management to results evaluation.
- Evaluate and fine-tune models to ensure high performance and accuracy.
- Collaborate with engineers to develop and integrate AI solutions into existing systems.
- Stay up to date with the latest advancements in Gen-AI and contribute to the company's technical knowledge base.

Must-Have:
- Strong expertise in Python development and the Python ecosystem, including frameworks/libraries for front-end and back-end development, data processing, API integration, and AI/ML solution development.
- Minimum 2 years of hands-on experience working with Large Language Models.
- Hands-on experience building production solutions using a variety of different LLMs and models, including the Azure OpenAI GPT model family primarily, but also Google Gemini, Anthropic Claude, etc.
- Deep experience and expertise in cloud Gen-AI platforms, services, and APIs, primarily Azure OpenAI.
- Solid hands-on experience with RAG pipelines and enterprise frameworks, including LangChain, LlamaIndex, etc.
- Solid hands-on experience developing end-to-end RAG pipelines.
- Solid hands-on experience with AI and LLM workflows.
- Experience with LLM model registries (Hugging Face), LLM APIs, embedding models, etc.
- Experience with vector databases (Azure AI Search, AWS Kendra, FAISS, Milvus, etc.).
- Experience with LLM evaluation frameworks such as Ragas, and their use to evaluate and improve LLM model outputs.
- Experience in data preprocessing and post-processing model/results evaluation.
- Hands-on experience with API integration and orchestration across multiple platforms.
- Good experience with workflow builders and low-code workflow builder tools such as Azure Logic Apps or n8n (Nodemation).
- Good experience with serverless cloud applications, including cloud/serverless functions with Azure.
- Good experience building automation workflows and solutions to facilitate rapid onboarding for digital products.
- Ability to lead design and development teams for full-stack Gen-AI apps and products/solutions built on LLMs and diffusion models.
- Ability to lead design and development for creative experiences and campaigns built on LLMs and diffusion models.

Nice-to-Have Skills (not essential, but useful):
- Good understanding of Transformer models and how they work.
- Hands-on experience fine-tuning LLM models at scale.
- Good experience with agent-driven Gen-AI architectures and solutions, and working with AI agents.
- Some experience with single-agent and multi-agent orchestration solutions.
- Hands-on experience with diffusion models and AI art models, including SDXL, DALL-E 3, Adobe Firefly, and Midjourney, is highly desirable.
- Hands-on experience with image processing and creative automation at scale using AI models.
- Hands-on experience with image and media transformation and adaptation at scale using AI art and diffusion models.
- Hands-on experience with dynamic creative use cases using AI art and diffusion models.
- Hands-on experience fine-tuning diffusion models, including techniques such as LoRA for AI art models.
- Hands-on experience with AI speech models and services, including text-to-speech and speech-to-text.
- Good background and foundation in machine learning solutions and algorithms.
- Experience designing, developing, and deploying production-grade machine learning solutions.
- Experience with Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs).
- Experience with custom ML model development and deployment.
- Proficiency in deep learning frameworks such as TensorFlow, PyTorch, or Keras.
- Strong knowledge of machine learning algorithms and their practical applications.
- Experience with cloud ML platforms such as Azure ML Service, AWS SageMaker, and NVIDIA AI Foundry.
- Hands-on experience with video generation models.
- Hands-on experience with 3D generation models.

Location: DGS India - Pune - Kharadi EON Free Zone
Brand: Dentsu Creative
Time Type: Full time
Contract Type: Consultant
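At its core, the retrieval step of the RAG pipelines this listing emphasizes ranks documents by embedding similarity before passing the top hits to an LLM. A minimal stdlib-only sketch of that step (toy 2-D vectors stand in for real embedding-model output; all names here are illustrative, not any specific framework's API):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query_vec: list[float], docs: list[tuple[str, list[float]]], k: int = 2) -> list[str]:
    """docs: (text, embedding) pairs; return the top-k texts by similarity to the query."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, d[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

docs = [("pricing FAQ", [1.0, 0.0]),
        ("release notes", [0.0, 1.0]),
        ("billing guide", [1.0, 1.0])]
top = retrieve([1.0, 0.0], docs, k=2)  # -> ["pricing FAQ", "billing guide"]
```

Production vector databases such as FAISS or Azure AI Search replace the linear scan with approximate nearest-neighbor indexes, but the ranking idea is the same.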
6.0 - 8.0 years
6 - 10 Lacs
Bengaluru
Work from Office
Senior Data Engineer (Contract)
Location: Bengaluru, Karnataka, India

About the Role:
We're looking for an experienced Senior Data Engineer (6-8 years) to join our data team. You'll be key in building and maintaining our data systems on AWS. You'll use your strong skills in big data tools and cloud technology to help our analytics team get valuable insights from our data. You'll own the whole lifecycle of our data pipelines, making sure the data is accurate, reliable, and fast.

What You'll Do:
- Design and build efficient data pipelines using Spark/PySpark/Scala.
- Manage complex data processes with Airflow, creating and fixing any issues with the workflows (DAGs).
- Clean, transform, and prepare data for analysis.
- Use Python for data tasks, automation, and building tools.
- Work with AWS services like S3, Redshift, EMR, Glue, and Athena to manage our data infrastructure.
- Collaborate closely with the Analytics team to understand what data they need and provide solutions.
- Help develop and maintain our Node.js backend, using TypeScript, for data services.
- Use YAML to manage the settings for our data tools.
- Set up and manage automated deployment processes (CI/CD) using GitHub Actions.
- Monitor and fix problems in our data pipelines to keep them running smoothly.
- Implement checks to ensure our data is accurate and consistent.
- Help design and build data warehouses and data lakes.
- Use SQL extensively to query and work with data in different systems.
- Work with streaming data using technologies like Kafka for real-time data processing.
- Stay updated on the latest data engineering technologies.
- Guide and mentor junior data engineers.
- Help create data management rules and procedures.

What You'll Need:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 6-8 years of experience as a Data Engineer.
- Strong skills in Spark and Scala for handling large amounts of data.
- Good experience with Airflow for managing data workflows and understanding DAGs.
- Solid understanding of how to transform and prepare data.
- Strong programming skills in Python for data tasks and automation.
- Proven experience working with AWS cloud services (S3, Redshift, EMR, Glue, IAM, EC2, and Athena).
- Experience building data solutions for analytics teams.
- Familiarity with Node.js for backend development.
- Experience with TypeScript for backend development is a plus.
- Experience using YAML for configuration management.
- Hands-on experience with GitHub Actions for automated deployment (CI/CD).
- Good understanding of data warehousing concepts.
- Strong database skills (OLAP/OLTP).
- Excellent command of SQL for data querying and manipulation.
- Experience with stream processing using Kafka or similar technologies.
- Excellent problem-solving, analytical, and communication skills.
- Ability to work well independently and as part of a team.

Bonus Points:
- Familiarity with data lake technologies (e.g., Delta Lake, Apache Iceberg).
- Experience with other stream processing technologies (e.g., Flink, Kinesis).
- Knowledge of data management, data quality, statistics, and data governance frameworks.
- Experience with infrastructure-as-code tools (e.g., Terraform).
- Familiarity with container technologies (e.g., Docker, Kubernetes).
- Experience with monitoring and logging tools (e.g., Prometheus, Grafana).
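The Airflow DAG experience this role calls for amounts to expressing a pipeline as a dependency graph whose tasks run in topological order. Python's standard library can sketch the concept without Airflow itself (the task names are illustrative, not from any real pipeline):

```python
from graphlib import TopologicalSorter

# Each key is a task; its set lists the tasks that must finish first,
# mirroring how an Airflow DAG wires upstream dependencies.
pipeline = {
    "extract": set(),
    "transform": {"extract"},
    "quality_check": {"transform"},
    "load": {"quality_check"},
}

# static_order() yields a valid execution order respecting every dependency.
order = list(TopologicalSorter(pipeline).static_order())
# For this linear chain: ["extract", "transform", "quality_check", "load"]
```

Airflow adds scheduling, retries, and monitoring on top, but a DAG that fails topological sorting (i.e., contains a cycle) is invalid in both worlds; `TopologicalSorter` raises `CycleError` in that case.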