Jobs
Interviews

61 Data Warehouse Jobs

Set up a Job Alert
JobPe aggregates results for easy access to listings, but you apply directly on the original job portal.

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

As a Senior Data Lake Developer at BCSS, you will play a crucial role in the advancement and expansion of our advanced analytics practice. Your primary responsibility will be designing and developing data lakes, managing data flows, and integrating information from diverse sources into a unified data lake platform using an ETL tool. You will also code and oversee delta lake implementations on S3 using technologies such as Databricks or Apache Hudi. You will triage, debug, and resolve technical issues related to data lakes, and design and develop data warehouses for scalability. Evaluating data models, designing data access patterns, and coordinating with business and technical teams throughout the software development life cycle are also key aspects of this role. Additionally, you will participate in significant technical and architectural decisions and maintain code repositories such as Git.

To excel in this role, you must have at least 5 years of experience on AWS Cloud with a focus on building data lake architectures, and a minimum of 3 years of experience with AWS data services such as S3, Glue, Lake Formation, EMR, Kinesis, RDS, DMS, and Redshift. You should also have 3+ years of experience building data warehouses on platforms such as Snowflake, Redshift, HANA, Teradata, or Exasol. Proficiency in Spark, experience building delta lakes with Apache Hudi or Databricks, and familiarity with ETL tools and technologies are essential. Finally, you should have 3+ years of experience in at least one programming language (Python, R, Scala, or Java) and hold a Bachelor's degree in computer science, information technology, data science, data analytics, or a related field. Experience with Agile projects and Agile methodology is highly beneficial.
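
For illustration, a minimal sketch of the kind of delta lake work this posting describes: writing and upserting a Delta table on S3 with Spark. It assumes a Spark session with the Delta Lake (delta-spark) package available; bucket paths, table names, and the merge key are hypothetical.

```python
# Minimal sketch: land a source extract as a Delta table on S3, then upsert a change batch.
# Assumes the delta-spark package is on the classpath; paths and keys are hypothetical.
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = (
    SparkSession.builder.appName("delta-lake-sketch")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog", "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

target_path = "s3://example-data-lake/curated/customers"  # hypothetical location

# Initial load: write the raw extract as a Delta table.
source_df = spark.read.parquet("s3://example-data-lake/raw/customers/")
source_df.write.format("delta").mode("overwrite").save(target_path)

# Incremental load: merge (upsert) a change batch into the Delta table.
changes_df = spark.read.parquet("s3://example-data-lake/raw/customers_changes/")
target = DeltaTable.forPath(spark, target_path)
(
    target.alias("t")
    .merge(changes_df.alias("c"), "t.customer_id = c.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```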

Posted 20 hours ago

Apply

5.0 - 9.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Senior SQL Developer at our company, you will play a crucial role in our BI & Analytics team by expanding and optimizing our data and data queries. Your responsibilities will include optimizing data flow and collection for consumption by our BI & Analytics platform. You should be an experienced builder of data queries and a skilled data wrangler with a passion for optimizing data systems from the ground up. Collaborating with software developers, database architects, data analysts, and data scientists, you will support data and product initiatives and ensure consistent, optimal data delivery architecture across ongoing projects. A self-directed approach will be essential in supporting the data needs of multiple systems and products. If you are excited about enhancing our company's data architecture to support our upcoming products and data initiatives, this role is perfect for you.

Your essential functions will involve creating and maintaining optimal SQL queries, views, tables, and stored procedures. By working closely with business units such as BI, Product, and Reporting, you will contribute to developing the data warehouse platform vision, strategy, and roadmap. Understanding physical and logical data models and ensuring high-performance access to diverse data sources will be key aspects of your role. Encouraging the adoption of organizational frameworks through documentation, sample code, and developer support will also be part of your responsibilities, as will communicating the progress and effectiveness of developed frameworks to department heads and managers.

To be successful in this role, you should possess a Bachelor's or Master's degree, or an equivalent combination of education and experience, in a relevant field. Proficiency in T-SQL, data warehouses, star schema, data modeling, OLAP, SQL, and ETL is required, and experience in creating tables, views, and stored procedures is crucial. Familiarity with BI and reporting platforms, industry trends, and multiple database platforms such as SQL Server and MySQL is necessary, along with proficiency in source control and project management tools such as Azure DevOps, Git, and JIRA. Experience with SonarQube for clean T-SQL coding practices and DevOps best practices will be advantageous. Applicants must have exceptional written and spoken communication skills and strong team-building abilities to contribute to strategic decisions and advise senior management on technical matters. With at least 5 years of experience in a data warehousing position, including work as a SQL Developer, and experience with the system development lifecycle, you should also have a proven track record in data integration, consolidation, enrichment, and aggregation. Strong analytical skills, attention to detail, organizational skills, and the ability to mentor junior colleagues will be crucial for success in this role.

This full-time position requires flexibility to support different time zones between 12 PM and 9 PM IST, Monday through Friday. You will work in a hybrid mode, spending at least 2 days per week in the Hyderabad office. Occasional evening and weekend work may be expected based on client needs or job-related emergencies. This job description may not cover all responsibilities and duties, which may change with or without notice.
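
For illustration, a minimal sketch of the kind of T-SQL objects this posting mentions (a reporting view and a parameterized stored procedure), created from Python via pyodbc. The connection string, schema, and object names are hypothetical.

```python
# Minimal sketch: create a reporting view and a stored procedure on SQL Server via pyodbc.
# Connection details and object names are hypothetical placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=example-host;DATABASE=ReportingDW;"
    "UID=example_user;PWD=example_password"
)
cursor = conn.cursor()

# A simple star-schema style reporting view.
cursor.execute("""
CREATE OR ALTER VIEW dbo.vw_MonthlySales AS
SELECT d.CalendarYear, d.CalendarMonth, p.ProductCategory, SUM(f.SalesAmount) AS TotalSales
FROM dbo.FactSales f
JOIN dbo.DimDate d    ON f.DateKey = d.DateKey
JOIN dbo.DimProduct p ON f.ProductKey = p.ProductKey
GROUP BY d.CalendarYear, d.CalendarMonth, p.ProductCategory;
""")

# A parameterized stored procedure over the view.
cursor.execute("""
CREATE OR ALTER PROCEDURE dbo.usp_GetMonthlySales @Year INT AS
BEGIN
    SET NOCOUNT ON;
    SELECT CalendarMonth, ProductCategory, TotalSales
    FROM dbo.vw_MonthlySales
    WHERE CalendarYear = @Year
    ORDER BY CalendarMonth;
END;
""")
conn.commit()
```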

Posted 1 day ago

Apply

5.0 - 9.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

As a Data Engineer at GlobalLogic, you will be responsible for architecting, building, and maintaining complex ETL/ELT pipelines for batch and real-time data processing using various tools and programming languages. Your key duties will include optimizing existing data pipelines for performance, cost-effectiveness, and reliability, as well as implementing data quality checks, monitoring, and alerting mechanisms to ensure data integrity. Additionally, you will play a crucial role in ensuring data security, privacy, and compliance with relevant regulations such as GDPR and local data laws. To excel in this role, you should possess a Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field. Excellent analytical, problem-solving, and critical thinking skills with meticulous attention to detail are essential. Strong communication (written and verbal) and interpersonal skills are also required, along with the ability to collaborate effectively with cross-functional teams. Experience with Agile/Scrum development methodologies is considered a plus. Your responsibilities will involve providing technical leadership and architecture by designing and implementing robust, scalable, and efficient data architectures that align with organizational strategy and future growth. You will define and enforce data engineering best practices, evaluate and recommend new technologies, and oversee the end-to-end data development lifecycle. As a leader, you will mentor and guide a team of data engineers, conduct code reviews, provide feedback, and promote a culture of engineering excellence. You will collaborate closely with data scientists, data analysts, software engineers, and business stakeholders to understand data requirements and translate them into technical solutions. Your role will also involve communicating complex technical concepts and data strategies effectively to both technical and non-technical audiences. At GlobalLogic, we offer a culture of caring, continuous learning and development opportunities, interesting and meaningful work, balance and flexibility, and a high-trust environment. By joining our team, you will have the chance to work on impactful projects, engage your curiosity and problem-solving skills, and contribute to shaping cutting-edge solutions that redefine industries. With a commitment to integrity and trust, GlobalLogic provides a safe, reliable, and ethical global environment where you can thrive both personally and professionally.,
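
For illustration, a minimal sketch of the kind of data quality checks and alerting this posting describes, written as a standalone pipeline step in Python with pandas. The table columns, thresholds, and alerting hook are hypothetical placeholders.

```python
# Minimal sketch: post-load data quality checks with simple alerting.
# Column names, thresholds, and the alert hook are hypothetical.
import logging
import pandas as pd

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("dq-checks")

def run_quality_checks(df: pd.DataFrame) -> list[str]:
    """Return human-readable failures; an empty list means the batch passed."""
    failures = []
    if df.empty:
        failures.append("batch is empty")
    if df["order_id"].duplicated().any():
        failures.append("duplicate order_id values found")
    null_rate = df["customer_id"].isna().mean()
    if null_rate > 0.01:  # tolerate at most 1% missing keys
        failures.append(f"customer_id null rate {null_rate:.2%} exceeds 1%")
    if (df["amount"] < 0).any():
        failures.append("negative amounts present")
    return failures

def alert(failures: list[str]) -> None:
    # Placeholder: in practice this might post to Slack, PagerDuty, or a monitoring topic.
    for failure in failures:
        log.error("Data quality failure: %s", failure)

if __name__ == "__main__":
    batch = pd.read_parquet("s3://example-bucket/staging/orders/latest/")  # hypothetical path
    problems = run_quality_checks(batch)
    if problems:
        alert(problems)
        raise SystemExit(1)  # fail this step so downstream loads do not run on bad data
    log.info("Batch passed all quality checks")
```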

Posted 2 days ago

Apply

8.0 - 12.0 years

0 Lacs

Haryana

On-site

As a Marketing CRM Automation and Salesforce Marketing Cloud Expert at The Knot Worldwide, you will play a crucial role in developing and implementing dynamic customer journeys, campaigns, and top-notch automation strategies across various CRM touchpoints such as email, push notifications, SMS, and in-app messaging for multiple brands, including The Knot and Wedding Wire. You will work hands-on with the Salesforce Marketing Cloud platform, showcasing your technical expertise in modifying HTML email code, creating email templates, push notifications, SMS messages, and in-app modals, as well as segmenting data using SQL and setting up tracking mechanisms for accurate attribution. Your role will involve collaborating with a creative team to design personalized and automated journeys that enhance the wedding planning experience for our couples. Your responsibilities will include building and overseeing engaging customer journeys, email campaigns, push notifications, SMS campaigns, and in-app automations utilizing tools like Journey Builder, Automation Studio, Email Studio, Data Extensions, Query Segmentation, and AMPScript for personalization. You will conduct A/B testing on complex personalized CRM programs and journeys, analyze customer data from various sources to create targeted segmentation and effective campaign strategies, and work closely with CRM campaign strategy managers to develop data-driven campaigns. Moreover, you will leverage your expertise in Salesforce Marketing Cloud to support campaigns across different business units, stay updated on the latest trends in Salesforce Marketing Cloud and SMS technology, and provide insights on enhancing internal processes. The ideal candidate for this role will be a Salesforce Marketing Cloud expert with proficiency in Journey Builder, Automation Studio, SQL, AMPScript, HTML/CSS, and Email Studio. Additional experience with tools like Power BI, Alteryx, ERP, CRM systems, and data warehouses will be advantageous. You should possess a strong background in business intelligence, marketing analytics, and ecommerce analytics, with a focus on applying analytics in digital commerce or marketing. Successful candidates will have a proven track record of managing large-scale projects and a minimum of 8 years of experience in CRM marketing automation and campaign execution. A minimum of 3-5 years of expertise in Salesforce Marketing Cloud, particularly in Journey Builder, Automation Studio, SQL, AMPScript, HTML, SCSS, and Email Studio, is required. Strong communication skills, a collaborative mindset, the ability to work in a fast-paced environment, and fluency in English are essential for this role.,

Posted 2 days ago

Apply

8.0 - 12.0 years

0 Lacs

Pune, Maharashtra

On-site

The ideal candidate for this position should have 8-12 years of experience and possess a strong understanding and hands-on experience with Microsoft Fabric. You will be responsible for designing and implementing end-to-end data solutions on Microsoft Azure, which includes data lakes, data warehouses, and ETL/ELT processes. Your role will involve developing scalable and efficient data architectures to support large-scale data processing and analytics workloads. Ensuring high performance, security, and compliance within Azure data solutions will be a key aspect of this role. You should have knowledge of various techniques such as lakehouse and warehouse, along with experience in implementing them. Additionally, you will be required to evaluate and select appropriate Azure services like Azure SQL Database, Azure Synapse Analytics, Azure Data Lake Storage, Azure Databricks, Unity Catalog, and Azure Data Factory. Deep knowledge and hands-on experience with these Azure Data Services are essential. Collaborating closely with business and technical teams to understand and translate data needs into robust and scalable data architecture solutions will be part of your responsibilities. You should also have experience in data governance, data privacy, and compliance requirements. Excellent communication and interpersonal skills are necessary for effective collaboration with cross-functional teams. In this role, you will provide expertise and leadership to the development team implementing data engineering solutions. Working with Data Scientists, Analysts, and other stakeholders to ensure data architectures align with business goals and data analysis requirements is crucial. Optimizing cloud-based data infrastructure for performance, cost-effectiveness, and scalability will be another key responsibility. Experience in programming languages like SQL, Python, and Scala is required. Hands-on experience with MS SQL Server, Oracle, or similar RDBMS platforms is preferred. Familiarity with Azure DevOps and CI/CD pipeline development is beneficial. An in-depth understanding of database structure principles and distributed data processing of big data batch or streaming pipelines is essential. Knowledge of data visualization tools such as Power BI and Tableau, along with data modeling and strong analytics skills is expected. The candidate should be able to convert OLTP data structures into Star Schema and ideally have DBT experience along with data modeling experience. A problem-solving attitude, self-motivation, attention to detail, and effective task prioritization are essential qualities for this role. At Hitachi, attitude and aptitude are highly valued as collaboration is key. While not all skills are required, experience with Azure SQL Data Warehouse, Azure Data Factory, Azure Data Lake, Azure Analysis Services, Databricks/Spark, Python or Scala, data modeling, Power BI, and database migration are desirable. Designing conceptual, logical, and physical data models using tools like ER Studio and Erwin is a plus.,
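
For illustration, a minimal sketch of converting an OLTP-style orders table into a simple star schema (one dimension plus one fact) with PySpark, as this posting describes. Storage paths and column names are hypothetical; it assumes a Databricks or Synapse Spark session with Delta support.

```python
# Minimal sketch: derive a customer dimension and a sales fact from an OLTP orders table.
# Paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("oltp-to-star").getOrCreate()

orders = spark.read.format("delta").load(
    "abfss://lake@exampleaccount.dfs.core.windows.net/oltp/orders"
)

# Dimension: one row per customer, with a surrogate key.
dim_customer = (
    orders.select("customer_id", "customer_name", "customer_segment")
    .dropDuplicates(["customer_id"])
    .withColumn("customer_key", F.monotonically_increasing_id())
)

# Fact: measures at order grain, keyed to the dimension.
fact_sales = (
    orders.join(dim_customer.select("customer_id", "customer_key"), "customer_id")
    .select(
        "customer_key",
        F.to_date("order_date").alias("order_date"),
        F.col("quantity"),
        F.col("unit_price"),
        (F.col("quantity") * F.col("unit_price")).alias("sales_amount"),
    )
)

base = "abfss://lake@exampleaccount.dfs.core.windows.net/warehouse"
dim_customer.write.format("delta").mode("overwrite").save(f"{base}/dim_customer")
fact_sales.write.format("delta").mode("overwrite").save(f"{base}/fact_sales")
```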

Posted 3 days ago

Apply

10.0 - 15.0 years

0 Lacs

Kochi, Kerala

On-site

As a seasoned leader in the field of Data, AI, and Automation, you will be responsible for packaging business solutions with clear ROI and effectively communicating the benefits to customers. You will play a crucial role in managing the P&L of the practice, leading a team of Architects, Technical Leads, and Engineers, and ensuring the smooth functioning of the entire team. Your ability to create and nurture a high-performing engineering team will be paramount, as you define and execute the organization's vision to be a leader in the Data, AI, and Automation space. You will set KPIs and objectives for the team that align with the organizational goals, and oversee the efficient allocation of personnel across various engagements. In this role, you will be required to design, architect, and implement cutting-edge Data & AI ML solutions and architectures, while adhering to best practices and principles. Your expertise will be utilized to lead initiatives, provide technical guidance, and drive business development in the realm of Data Science, AI, and ML. Additionally, you will be expected to stay abreast of emerging trends and best practices in Data AI and Automation, and leverage your expertise to lead and grow a superior organization of technical talent. Collaborating with peer engineering teams, you will gather technical requirements to shape Data, AI, and Automation technology roadmaps. Your responsibilities will also include enabling sales strategies with Data & AI solutions, representing the company in industry events, nurturing alliances and partnerships, and developing practice branding artifacts that showcase thought leadership. **Mandatory Qualifications:** - Possess over 15 years of experience in working on highly distributed and scalable enterprise applications - Demonstrate a deep understanding of Data warehouses, Big Data, AI ML, and Automation technologies with over 10 years of experience - Knowledge in Generative AI is desirable - Hands-on experience in developing and implementing machine learning algorithms, utilizing relevant programming languages and big data tools - Proficiency in evaluating and selecting algorithms and tools for projects - Extensive experience in advanced ML techniques such as neural networks, deep learning, and reinforcement learning - Familiarity with open-source technologies, ML libraries, and programming languages - Strong understanding of cloud platforms and their services for implementing Big Data and AI ML solutions - Appreciation of CI/CD processes and tools, guiding the team in decision-making related to CI/CD strategies - Experience with Agile development management tools and API/Microservices architecture-based systems development - Excellent communication, leadership, and interpersonal skills - Strong solutioning and presentation skills - Previous experience in Presales and Enterprise systems integration - Ability to envision and develop innovative solutions and accelerators,

Posted 3 days ago

Apply

5.0 - 9.0 years

0 Lacs

Haryana

On-site

You will collaborate with stakeholders, including Domain Leads in Operations, IT, and Data, to understand the business needs and shape the vision and roadmap for data-driven initiatives aligned with strategic priorities. You will contribute to the development of the program vision and communicate the product and portfolio vision to your team. Working closely with data scientists, engineers, and designers, you will ensure products are built efficiently, meet user needs, and provide actionable insights. As a Data Product Owner, you will analyze data sources, data technologies, and vendors providing data services to leverage in the data product roadmap development. You will create necessary ER diagrams, data models, PRD/BRD to convey requirements and be accountable for developing and achieving product level KPIs. Managing data products with a moderate degree of strategy, scope, and complexity, you will ensure data accuracy, consistency, and security by establishing data governance frameworks and implementing data management best practices. In this role, you will collaborate with technology and business leadership to align system/application integrations inline with business goals and priorities. You will own and maintain the product backlog, prioritize its contents, and ensure clear, actionable user stories. Additionally, you will set priorities, actively participate in squad/team quarterly planning, and work closely with the agile working group to clarify business requirements, remove roadblocks, and support alignment around product strategy. Monitoring and maintaining the product health, supporting long-term product viability and efficiency, you will balance long and short-term costs with desired outcomes. You will analyze and report on feasibility, cost of delay ramifications, economies, or other aspects of planned or potential changes to the product. Understanding regulatory, compliance, and industry constraints on the product, you will negotiate with internal and external teams to ensure priorities are aligned across squads/teams both within and outside the portfolio. To qualify for this position, you should hold a Bachelor's degree in computer science, Business Administration, or related field, with a Master's degree preferred. You must have a good understanding of data technologies such as databases, data warehouses, and data lakes, along with proven experience of 5+ years as a Data Product Owner, Data Product Manager, or similar role in data or software development. Strong understanding of Agile methodologies, including Scrum and Kanban, and proficiency in programming languages such as Python, R, SQL, or SAS, and cloud technologies like AWS, Azure, are essential. Excellent analytical, problem-solving, decision-making, communication, negotiation, and interpersonal skills are required, along with proficiency in product management tools and the Microsoft Office Suite. Familiarity with UX/UI design principles, software development lifecycle, and software engineering concepts is a plus, as well as experience in insurance, particularly Commercial & Specialty Insurance products. Experience with product management tools such as JIRA, Trello, or Asana, and proficiency in Microsoft Office Suite is preferred. Familiarity with UX/UI design principles, software development lifecycle (SDLC), and software engineering concepts is a plus. Agile practitioner capabilities and experience working with or in Agile teams are highly valued. 
Strong teamwork, coordination, organization, and planning skills are necessary. The ability to capture complex requirements in a prioritized backlog and to manage stakeholders' requirements is vital for success in this role.

Posted 3 days ago

Apply

4.0 - 8.0 years

0 Lacs

Karnataka

On-site

We empower our people to stay resilient and relevant in a constantly changing world. We are looking for individuals who are always seeking creative ways to grow and learn, individuals who aspire to make a real impact, both now and in the future. If this resonates with you, then you would be a valuable addition to our dynamic international team. We are currently seeking a Senior Software Engineer - Data Engineer (AI Solutions). In this role, you will have the opportunity to: - Design, build, and maintain data pipelines to cater to the requirements of various stakeholders, including software developers, data scientists, analysts, and business teams. - Ensure that the data pipelines are modular, resilient, and optimized for performance and low maintenance. - Collaborate with AI/ML teams to support training, inference, and monitoring needs through structured data delivery. - Implement ETL/ELT workflows for structured, semi-structured, and unstructured data using cloud-native tools. - Work with large-scale data lakes, streaming platforms, and batch processing systems to ingest and transform data. - Establish robust data validation, logging, and monitoring strategies to uphold data quality and lineage. - Optimize data infrastructure for scalability, cost-efficiency, and observability in cloud-based environments. - Ensure adherence to governance policies and data access controls across projects. To excel in this role, you should possess the following qualifications and skills: - A Bachelor's degree in Computer Science, Information Systems, or a related field. - Minimum of 4 years of experience in designing and deploying scalable data pipelines in cloud environments. - Proficiency in Python, SQL, and data manipulation tools and frameworks such as Apache Airflow, Spark, dbt, and Pandas. - Practical experience with data lakes, data warehouses (e.g., Redshift, Snowflake, BigQuery), and streaming platforms (e.g., Kafka, Kinesis). - Strong understanding of data modeling, schema design, and data transformation patterns. - Experience with AWS (Glue, S3, Redshift, Sagemaker) or Azure (Data Factory, Azure ML Studio, Azure Storage). - Familiarity with CI/CD for data pipelines and infrastructure-as-code (e.g., Terraform, CloudFormation). - Exposure to building data solutions that support AI/ML pipelines, including feature stores and real-time data ingestion. - Understanding of observability, data versioning, and pipeline testing tools. - Previous engagement with diverse stakeholders, data requirement gathering, and support for iterative development cycles. - Background or familiarity with the Power, Energy, or Electrification sector is advantageous. - Knowledge of security best practices and data compliance policies for enterprise-grade systems. This position is based in Bangalore, offering you the opportunity to collaborate with teams that impact entire cities, countries, and shape the future. Siemens is a global organization comprising over 312,000 individuals across more than 200 countries. We are committed to equality and encourage applications from diverse backgrounds that mirror the communities we serve. Employment decisions at Siemens are made based on qualifications, merit, and business requirements. Join us with your curiosity and creativity to help shape a better tomorrow. Learn more about Siemens careers at: www.siemens.com/careers Discover the Digital world of Siemens here: www.siemens.com/careers/digitalminds,
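
For illustration, a minimal sketch of a modular pipeline expressed as an Apache Airflow DAG, one of the tools this posting names. It assumes Airflow 2.x; the task bodies, DAG id, and schedule are hypothetical placeholders.

```python
# Minimal sketch: a small Airflow 2.x DAG wiring extract -> transform -> load tasks.
# Task bodies and the schedule are hypothetical placeholders.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**_):
    print("pull source data to staging")

def transform(**_):
    print("clean and enrich staged data")

def load(**_):
    print("load curated data into the warehouse")

with DAG(
    dag_id="example_daily_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load
```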

Posted 4 days ago

Apply

4.0 - 8.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a GCP Developer, you will be responsible for maintaining the stability of production platforms, delivering new features, and minimizing technical debt across various technologies. You should have a minimum of 4 years of experience in the field, a strong commitment to maintaining high standards, and a genuine passion for ensuring quality in your work. Proficiency in GCP, Python, Hadoop, Spark, Cloud, Scala, streaming (Pub/Sub), Kafka, SQL, Dataproc, and Dataflow is essential for this role. Additionally, familiarity with data warehouses, distributed data platforms, and data lakes is required. You should possess knowledge of database definition, schema design, and Looker Views and Models. An understanding of data structures and algorithms is crucial for success in this position, and experience with CI/CD practices would be advantageous. This position involves working in a dynamic environment across multiple locations, including Chennai, Hyderabad, and Bangalore. A total of 20 positions are available for qualified candidates.
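
For illustration, a minimal sketch of a parameterized BigQuery query run from Python, the kind of check a GCP developer might use to validate pipeline output. The project, dataset, and table names are hypothetical.

```python
# Minimal sketch: run a parameterized query against BigQuery with the official Python client.
# Project, dataset, and table names are hypothetical placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")

query = """
SELECT event_date, COUNT(*) AS events
FROM `example-project.analytics.events`
WHERE event_date BETWEEN @start AND @end
GROUP BY event_date
ORDER BY event_date
"""
job_config = bigquery.QueryJobConfig(
    query_parameters=[
        bigquery.ScalarQueryParameter("start", "DATE", "2024-01-01"),
        bigquery.ScalarQueryParameter("end", "DATE", "2024-01-31"),
    ]
)

for row in client.query(query, job_config=job_config).result():
    print(row.event_date, row.events)
```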

Posted 4 days ago

Apply

5.0 - 9.0 years

0 - 17 Lacs

Hyderabad

Work from Office

Experience:
- 3-5 years of prior Product Management experience working with data warehouses, clouds, or AdTech platforms.
- Bachelor's or Master of Science in Computer Science, Information Systems, Business, or a related degree.
- Ability to lead demonstrations for technical and non-technical audiences.
- Deep understanding of APIs and how they work.
- Proven ability to create product artifacts, including product requirement documents (PRDs), epics, story maps, and OKRs.
- High-level understanding of the product development lifecycle (PDLC).
- Basic understanding of coding and software development.
- Excellent attention to detail.
- Excellent written and verbal communication skills.
- Type S(tartup) personality: smart, ethical, friendly, hard-working, and proactive (no exceptions).
- Great problem solver - you thrive on finding creative solutions to poorly defined problems that impact customers.

Bonus Points:
- Prior experience with relational databases, data warehouses, and cloud environments.
- Knowledge of data science principles.

Posted 5 days ago

Apply

3.0 - 8.0 years

0 Lacs

Delhi

On-site

As a Snowflake Solution Architect, you will be responsible for owning and driving the development of Snowflake solutions and products as part of the COE. Your role will involve working with and guiding the team to build solutions using the latest innovations and features launched by Snowflake. Additionally, you will conduct sessions on the latest and upcoming launches of the Snowflake ecosystem and liaise with Snowflake Product and Engineering to stay ahead of new features, innovations, and updates. You will be expected to publish articles and architectures that can solve business problems for businesses. Furthermore, you will work on accelerators to demonstrate how Snowflake solutions and tools integrate and compare with other platforms such as AWS, Azure Fabric, and Databricks. In this role, you will lead the post-sales technical strategy and execution for high-priority Snowflake use cases across strategic customer accounts. You will also be responsible for triaging and resolving advanced, long-running customer issues while ensuring timely and clear communication. Developing and maintaining robust internal documentation, knowledge bases, and training materials to scale support efficiency will also be a part of your responsibilities. Additionally, you will support with enterprise-scale RFPs focused around Snowflake. To be successful in this role, you should have at least 8 years of industry experience, including a minimum of 3 years in a Snowflake consulting environment. You should possess experience in implementing and operating Snowflake-centric solutions and proficiency in implementing data security measures, access controls, and design specifically within the Snowflake platform. An understanding of the complete data analytics stack and workflow, from ETL to data platform design to BI and analytics tools is essential. Strong skills in databases, data warehouses, data processing, as well as extensive hands-on expertise with SQL and SQL analytics are required. Familiarity with data science concepts and Python is a strong advantage. Knowledge of Snowflake components such as Snowpipe, Query Parsing and Optimization, Snowpark, Snowflake ML, Authorization and Access control management, Metadata Management, Infrastructure Management & Auto-scaling, Snowflake Marketplace for datasets and applications, as well as DevOps & Orchestration tools like Airflow, dbt, and Jenkins is necessary. Possessing Snowflake certifications would be a good-to-have qualification. Strong communication and presentation skills are essential in this role as you will be required to engage with both technical and executive audiences. Moreover, you should be skilled in working collaboratively across engineering, product, and customer success teams. This position is open in all Xebia office locations including Pune, Bangalore, Gurugram, Hyderabad, Chennai, Bhopal, and Jaipur. If you meet the above requirements and are excited about this opportunity, please share your details here: [Apply Now](https://forms.office.com/e/LNuc2P3RAf),
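
For illustration, a minimal sketch of Snowflake access-control and sanity-check statements issued from Python with the Snowflake connector, in the spirit of the access management this posting mentions. Account, role, schema, and table names are hypothetical.

```python
# Minimal sketch: connect to Snowflake and apply a simple role-based access grant,
# then run a sanity query. Account, credentials, and object names are hypothetical.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",
    user="example_user",
    password="example_password",
    warehouse="ANALYTICS_WH",
    database="ANALYTICS",
    schema="CURATED",
)
cur = conn.cursor()
try:
    # Grant read access on the curated schema to a reporting role.
    cur.execute("GRANT USAGE ON SCHEMA ANALYTICS.CURATED TO ROLE REPORTING_READER")
    cur.execute("GRANT SELECT ON ALL TABLES IN SCHEMA ANALYTICS.CURATED TO ROLE REPORTING_READER")

    # Quick sanity query against a curated table.
    cur.execute("SELECT COUNT(*) FROM CURATED.CUSTOMERS")
    print("row count:", cur.fetchone()[0])
finally:
    cur.close()
    conn.close()
```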

Posted 6 days ago

Apply

5.0 - 9.0 years

0 Lacs

Haryana

On-site

As the leader of a team of 5-6 data professionals, you will be responsible for managing delivery across 2-3 client engagements simultaneously. This role requires a combination of technical expertise in data analytics, BI development, and cloud solutions, along with team management and client relationship management responsibilities. In terms of team leadership and delivery, your key responsibilities will include leading and mentoring the data professionals in your team, managing project deliveries for multiple client engagements, establishing project timelines and quality standards, reviewing and approving technical deliverables, monitoring team performance, and providing guidance as needed. You will also be involved in technical solution design and implementation, where you will design data integration solutions using Azure Data Factory and other cloud services, develop complex semantic models and data warehouse solutions, create dashboards and reports using Power BI and Tableau, implement row-level security and data governance standards, establish technical best practices, and ensure adherence to standards. Furthermore, as part of client and stakeholder management, you will be required to build and maintain strong relationships with key stakeholders, lead client meetings and presentations, drive requirement gathering and solution validation, manage project scope and expectations, identify and mitigate project risks, and ensure client satisfaction and delivery quality. To qualify for this role, you should hold a Bachelor's degree in computer science, Statistics, Mathematics, or a related field, along with at least 5 years of experience in data analytics and BI, including a minimum of 2 years in a managerial role. You should have expert-level proficiency in SQL and data modeling, as well as proficiency in data visualization tools such as Tableau and Power BI, and programming languages. Advanced knowledge of DAX for writing complex calculations, strong expertise in Azure cloud services (particularly Azure Data Factory), and proven experience in designing and implementing data warehouses are also required. Additionally, you should possess strong analytical and problem-solving skills, with a focus on data-driven decision-making, experience with data management and statistical analysis techniques, excellent communication and stakeholder management skills, and preferably Power BI certification and Azure certifications (Data Engineer, Solution Architect). Hands-on experience with Python for data analysis would be a plus.,
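
For illustration, a minimal sketch of triggering and polling an Azure Data Factory pipeline run from Python, assuming the azure-identity and azure-mgmt-datafactory packages. The subscription, resource group, factory, and pipeline names are hypothetical.

```python
# Minimal sketch: start an ADF pipeline run and poll it until completion.
# Subscription, resource group, factory, and pipeline names are hypothetical.
import time
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

subscription_id = "00000000-0000-0000-0000-000000000000"  # hypothetical
resource_group = "rg-analytics"
factory_name = "adf-client-analytics"
pipeline_name = "pl_load_sales_dw"

client = DataFactoryManagementClient(DefaultAzureCredential(), subscription_id)

run = client.pipelines.create_run(resource_group, factory_name, pipeline_name, parameters={})
print("started run:", run.run_id)

while True:
    status = client.pipeline_runs.get(resource_group, factory_name, run.run_id)
    if status.status not in ("Queued", "InProgress"):
        print("finished with status:", status.status)
        break
    time.sleep(30)
```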

Posted 6 days ago

Apply

10.0 - 14.0 years

0 Lacs

Karnataka

On-site

Do you want to help solve the world's most pressing challenges such as feeding the world's growing population and slowing climate change AGCO is looking for individuals to join them in making a difference. Currently, AGCO is seeking a Senior Manager, AI & Data Systems Architecture to lead the design, creation, and evolution of system architectures for AI, analytics, and data systems within the organization. As the Senior Manager, AI & Data Systems Architecture, you will collaborate with cross-functional teams, including data engineers, data scientists, and other IT professionals to create architectures that support cutting-edge AI and data initiatives. Your responsibilities will include leading the end-to-end architecture for AI and data systems, designing and implementing data infrastructure and AI platforms, championing cloud adoption strategies, and driving the continuous improvement and evolution of data and AI architectures to meet emerging business needs and industry trends. To qualify for this role, you should have a minimum of 10 years of experience in data architecture, AI systems, or cloud infrastructure, with at least 3-5 years in a leadership role. You should also possess deep hands-on experience with cloud platforms like AWS, Google Cloud Platform (GCP), and Databricks, as well as familiarity with CRM systems like Salesforce and AI systems within those solutions. Additionally, you should have expertise in data architecture, including data lakes, data warehouses, real-time data streaming, and batch processing frameworks. The ideal candidate will have strong leadership and communication skills, with the ability to drive architecture initiatives in a collaborative and fast-paced environment. A Bachelor's degree in Computer Science, Data Science, or a related field is required, while a Master's degree or relevant certifications such as AWS Certified Solutions Architect are preferred. AGCO offers a positive workplace culture that values inclusion and diversity, providing benefits such as health care and wellness plans, flexible work options, and opportunities for personal development and growth. If you are passionate about leveraging innovative technologies to make a positive impact and contribute to the future of agriculture, apply now to join AGCO in their mission.,

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

As a member of the Google Cloud Consulting Professional Services team, you will have the opportunity to contribute to the success of businesses by guiding them through their cloud journey and leveraging Google's global network, data centers, and software infrastructure. Your role will involve assisting customers in transforming their businesses by utilizing technology to connect with customers, employees, and partners. Your responsibilities will include interacting with stakeholders to understand customer requirements and providing recommendations for solution architectures. You will collaborate with technical leads and partners to lead migration and modernization projects to Google Cloud Platform (GCP). Additionally, you will design, build, and operationalize data storage and processing infrastructure using Cloud native products, ensuring data quality and governance procedures are in place to maintain accuracy and reliability. In this role, you will work on data migrations, modernization projects, and design data processing systems optimized for scaling. You will troubleshoot platform/product tests, understand data governance and security controls, and travel to customer sites to deploy solutions and conduct workshops to educate and empower customers. Furthermore, you will be responsible for translating project requirements into goals and objectives, creating work breakdown structures to manage internal and external stakeholders effectively. You will collaborate with Product Management and Product Engineering teams to drive excellence in products and contribute to the digital transformation of organizations across various industries. By joining this team, you will play a crucial role in shaping the future of businesses of all sizes and assisting them in leveraging Google Cloud to accelerate their digital transformation journey.,

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

Job Description: As a Data Modeler at PwC, you will play a crucial role in analyzing business needs, developing long-term data models, and ensuring the efficiency and consistency of data systems. Your expertise in data modeling, metadata management, and data system optimization will contribute to enhancing the overall performance of our data infrastructure. Key responsibilities include: - Analyzing and translating business needs into comprehensive data models. - Evaluating existing data systems and recommending improvements for optimization. - Defining rules to efficiently translate and transform data across various data models. - Collaborating with the development team to create conceptual data models and data flows. - Developing best practices for data coding to maintain consistency within the system. - Reviewing modifications of existing systems for cross-compatibility and efficiency. - Implementing data strategies and developing physical data models to meet business requirements. - Utilizing canonical data modeling techniques to enhance the efficiency of data systems. - Evaluating implemented data systems for variances, discrepancies, and optimal performance. - Troubleshooting and optimizing data systems to ensure seamless operation. Key expertise required: - Strong proficiency in relational and dimensional modeling (OLTP, OLAP). - Experience with data modeling tools such as Erwin, ER/Studio, Visio, PowerDesigner. - Proficiency in SQL and database management systems (Oracle, SQL Server, MySQL, PostgreSQL). - Knowledge of NoSQL databases (MongoDB, Cassandra) and their data structures. - Familiarity with data warehouses and BI tools (Snowflake, Redshift, BigQuery, Tableau, Power BI). - Experience with ETL processes, data integration, and data governance frameworks. - Excellent analytical, problem-solving, and communication skills. Qualifications: - Bachelor's degree in Engineering or a related field. - 3 to 5 years of experience in data modeling or a related field. - 4+ years of hands-on experience with dimensional and relational data modeling. - Expert knowledge of metadata management and related tools. - Proficiency with data modeling tools such as Erwin, PowerDesigner, or Lucid. - Knowledge of transactional databases and data warehouses. Preferred Skills: - Experience in cloud-based data solutions (AWS, Azure, GCP). - Knowledge of big data technologies (Hadoop, Spark, Kafka). - Understanding of graph databases and real-time data processing. - Certifications in data management, modeling, or cloud data engineering. - Excellent communication and presentation skills. - Strong interpersonal skills to collaborate effectively with various teams.,

Posted 1 week ago

Apply

12.0 - 16.0 years

0 Lacs

Pune, Maharashtra

On-site

As a Vice President, Solution Architect II at BNY, you will be a key member of the Product and Reference master group within the Data Solution and Services (DSS) Platform team. This role, based in Pune, MH (HYBRID), will involve architecting and delivering solutions that support the consolidation, governance, and management of reference data, product master data, and metrics master data across the firm. You must have significant financial services experience within the banking industry, particularly in domains such as Asset Servicing, Issuer Services, Clearance and Collateral Management, Markets, Structured Debt, Credit Services, and Treasury Services. Your responsibilities will include delivering high-quality architecture and design solutions aligned with BNY policies and standards. You will collaborate with various teams including the architecture community, application development, infrastructure engineering, and key stakeholders to develop integrated technology and business solutions. Additionally, you will lead and develop staff to maximize their impact and professional growth, ensuring consistent, high-quality delivery of architecture and design services. In this role, you will lead high-level architecture for the Product and Reference Masters group within the DSS platform, ensuring compliance with BNY's data governance and regulatory standards. You will design APIs and data access layers to enable bank-wide reference data consumption and management, standardize product catalogs, and oversee authoritative metrics sources. Furthermore, you will collaborate with various teams to deliver scalable, reusable, and compliant solutions aligned with technology domain architecture. To be successful in this role, you should have a Bachelor's degree in computer science or a related discipline, along with 12+ years of experience in technology and architecture roles within distributed technology environments. Your background in securities or financial services, experience with both cloud and on-premise data platforms, and expertise in architecture frameworks and technologies will be beneficial. Strong communication skills, problem-solving abilities, and a passion for driving innovative ideas will also contribute to your success in this role. BNY is committed to diversity and inclusion, and as an Equal Employment Opportunity/Affirmative Action Employer, we welcome applications from underrepresented racial and ethnic groups, females, individuals with disabilities, and protected veterans. Join us at BNY to be part of a culture that empowers you to grow, succeed, and make a difference in the world of finance.,
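
For illustration, a minimal sketch of a reference-data lookup API of the kind this posting describes (a data access layer for product master data). The framework choice (FastAPI) and the data model are illustrative assumptions, not the firm's actual stack.

```python
# Minimal sketch: a small reference-data lookup service. FastAPI and the in-memory
# product master are illustrative stand-ins for a governed reference data store.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="Reference Data Service (sketch)")

class Product(BaseModel):
    product_code: str
    name: str
    asset_class: str

# Stand-in for a governed product master store.
_PRODUCTS = {
    "EQ-CASH": Product(product_code="EQ-CASH", name="Cash Equity", asset_class="Equity"),
    "FI-GOVT": Product(product_code="FI-GOVT", name="Government Bond", asset_class="Fixed Income"),
}

@app.get("/products/{product_code}", response_model=Product)
def get_product(product_code: str) -> Product:
    product = _PRODUCTS.get(product_code.upper())
    if product is None:
        raise HTTPException(status_code=404, detail="unknown product code")
    return product

@app.get("/products", response_model=list[Product])
def list_products(asset_class: str | None = None) -> list[Product]:
    items = list(_PRODUCTS.values())
    if asset_class:
        items = [p for p in items if p.asset_class == asset_class]
    return items
```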

Posted 1 week ago

Apply

4.0 - 9.0 years

20 - 27 Lacs

Hyderabad, Pune, Bengaluru

Work from Office

Required Skills:
- Good understanding of MDM core concepts.
- Knowledge of Web Services, XML, SOAP, REST, and integration middleware.
- Object-oriented software application design and development.
- Working knowledge of HTML, JavaScript, PHP, APEX triggers, and JSP/ASP.NET.
- Working knowledge of DBMS concepts, SQL, and PL/SQL, plus experience with relational databases (DB2, Oracle, SQL Server, or others), data warehouses, and UML.
- Well versed in design documents such as HLD and LLD, and able to propose solution architecture based on client requirements.
- Self-starter in solution implementation, working from design document inputs.

Preferred Skills:
- Experience with additional MDM tools beyond Reltio, such as Informatica MDM, IBM Initiate, SAP MDG, and Oracle MDM.
- MDM platform experience, including expertise with MDM data models, matching rules, trust frameworks, complex hierarchy and relationship management, and data governance.
- Knowledge of all phases of the Systems Development Life Cycle (SDLC) using Agile practices.

Location: Hyderabad, Pune, Mumbai, Gurugram, Chennai, Kolkata
Education: Bachelor's/Master's degree in Computer Science, MCA, M.Sc, or MBA
Experience: 6-9 years of hands-on experience in Reltio MDM.

Posted 1 week ago

Apply

10.0 - 14.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

We are looking for a highly motivated and experienced Data and Analytics Senior Architect to lead our Master Data Management (MDM) and Data Analytics team. As the Data and Analytics Architect Lead, you will be responsible for defining and implementing the overall data architecture strategy to ensure alignment with business goals and support data-driven decision-making. Your role will involve designing scalable, secure, and efficient data systems, including databases, data lakes, and data warehouses. You will evaluate and recommend tools and technologies for data integration, processing, storage, and analytics while staying updated on industry trends. You will lead a high-performing team, fostering a collaborative and innovative culture, and ensuring data integrity, consistency, and availability across the organization. You will manage the existing MDM solution and data platform, which is based on Microsoft Data Lake Gen 2, Snowflake as the DWH, and Power BI, and manages data from core applications. Additionally, you will drive further development to handle additional data and capabilities to support our AI journey. The ideal candidate will possess strong leadership skills, a deep understanding of data management and technology principles, and the ability to collaborate effectively across different departments and functions.

**Principal Duties and Responsibilities:**

**Team Leadership:**
- Lead, mentor, and develop a high-performing team of data analysts and MDM specialists.
- Foster a collaborative and innovative team culture that encourages continuous improvement and efficiency.
- Provide technical leadership and guidance to the development teams and oversee the implementation of IT solutions.

**Architecture:**
- Define the overall data architecture strategy, aligning it with business goals and ensuring it supports data-driven decision-making.
- Identify, evaluate, and establish shared enabling technical capabilities for the division in collaboration with IT to ensure consistency, quality, and business value.
- Design and oversee the implementation of data systems, including databases, data lakes, and data warehouses, ensuring they are scalable, secure, efficient, and cost-effective.
- Evaluate and recommend tools and technologies for data integration, processing, storage, and analytics, staying updated on industry trends.

**Strategic Planning:**
- Develop and implement the MDM and analytics strategy aligned with the overall team and organizational goals.
- Work with the Enterprise Architect to align on the overall strategy and application landscape and ensure MDM and data analytics fit into the ecosystem.
- Identify opportunities to enhance data quality, governance, and analytics capabilities.

**Project Management:**
- Oversee project planning, execution, and delivery to ensure timely and successful completion of initiatives.
- Monitor project progress and cost, identify risks, and implement mitigation strategies.

**Stakeholder Engagement:**
- Collaborate with cross-functional teams to understand data needs and deliver solutions that support business objectives.
- Serve as a key point of contact for data-related inquiries and support requests.
- Develop business cases and proposals for IT investments and present them to senior management and stakeholders.

**Data/Information Governance:**
- Establish and enforce data/information governance policies and standards to ensure compliance and data integrity.
- Champion best practices in data management and analytics across the organization.

**Reporting and Analysis:**
- Utilize data analytics to derive insights and support decision-making processes.
- Document and present findings and recommendations to senior management.

**Knowledge, Skills and Abilities Required:**
- Bachelor's degree in Computer Science, Data Science, Information Management, or a related field; master's degree preferred.
- 10+ years of experience in data management, analytics, or a related field, with at least 2 years in a leadership role.
- Strong knowledge of master data management concepts, data governance, data technology, and analytics tools.
- Proficiency in data modeling, ETL processes, database management, big data technologies, and data integration techniques.
- Excellent project management skills with a proven track record of delivering complex projects on time and within budget.
- Strong analytical, problem-solving, and decision-making abilities.
- Exceptional communication and interpersonal skills.
- Team player; result-oriented and structured, with attention to detail and a strong work ethic.

**Special Competencies Required:**
- Proven leader with excellent structural skills, good at documenting and presenting.
- Strong executional skills to make things happen, not just generate ideas.
- Experience working with analytics tools and data ingestion platforms.
- Experience working with MDM solutions, preferably TIBCO EBX.
- Experience working with Jira/Confluence.

**Additional Information:**
- Office, remote, or hybrid working.
- Ability to function within variable time zones.
- International travel may be required.

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Pune, Maharashtra

On-site

As a Data Analytics focused Senior Software Engineer at PubMatic, you will be responsible for developing advanced AI agents to enhance data analytics capabilities. Your expertise in building and optimizing AI agents, along with strong skills in Hadoop, Spark, Scala, Kafka, Spark Streaming, and cloud-based solutions, will play a crucial role in improving data-driven insights and analytical workflows. Your key responsibilities will include building and implementing a highly scalable big data platform to process terabytes of data, developing backend services using Java, REST APIs, JDBC, and AWS, and building and maintaining Big Data pipelines using technologies like Spark, Hadoop, Kafka, and Snowflake. Additionally, you will design and implement real-time data processing workflows, develop GenAI-powered agents for analytics and data enrichment, and integrate LLMs into existing services for query understanding and decision support. You will work closely with cross-functional teams to enhance the availability and scalability of large data platforms and PubMatic software functionality. Participating in Agile/Scrum processes, discussing software features with product managers, and providing customer support over email or JIRA will also be part of your role. We are looking for candidates with three plus years of coding experience in Java and backend development, solid computer science fundamentals, expertise in developing software engineering best practices, hands-on experience with Big Data tools, and proven expertise in building GenAI applications. The ability to lead feature development, debug distributed systems, and learn new technologies quickly are essential. Strong interpersonal and communication skills, including technical communications, are highly valued. To qualify for this role, you should have a bachelor's degree in engineering (CS/IT) or an equivalent degree from well-known Institutes/Universities. PubMatic employees globally have returned to our offices via a hybrid work schedule to maximize collaboration, innovation, and productivity. Our benefits package includes paternity/maternity leave, healthcare insurance, broadband reimbursement, and office perks like healthy snacks, drinks, and catered lunches. About PubMatic: PubMatic is a leading digital advertising platform that provides transparent advertising solutions to publishers, media buyers, commerce companies, and data owners. Our vision is to enable content creators to run a profitable advertising business and invest back into the multi-screen and multi-format content that consumers demand.,
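
For illustration, a minimal sketch of a Spark Structured Streaming job that reads events from Kafka and writes micro-batches to a lake path, along the lines of the real-time workflows this posting describes. It assumes the Spark Kafka connector package is available; broker addresses, the topic, the schema, and paths are hypothetical.

```python
# Minimal sketch: consume JSON events from Kafka with Spark Structured Streaming and
# persist micro-batches as Parquet. Brokers, topic, schema, and paths are hypothetical.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, LongType

spark = SparkSession.builder.appName("kafka-stream-sketch").getOrCreate()

event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("publisher_id", StringType()),
    StructField("ts", LongType()),  # epoch milliseconds
])

raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092,broker2:9092")
    .option("subscribe", "ad-events")
    .option("startingOffsets", "latest")
    .load()
)

events = (
    raw.select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
    .withColumn("event_time", (F.col("ts") / 1000).cast("timestamp"))
)

query = (
    events.writeStream.format("parquet")
    .option("path", "s3://example-bucket/streams/ad-events/")
    .option("checkpointLocation", "s3://example-bucket/checkpoints/ad-events/")
    .trigger(processingTime="1 minute")
    .start()
)
query.awaitTermination()
```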

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

You will be responsible for designing, developing, and maintaining dashboards and reports using Sigma Computing. Your main focus will be on collaborating with business stakeholders to understand data requirements and deliver actionable insights. It will be crucial for you to write and optimize SQL queries that run directly on cloud data warehouses. Additionally, enabling self-service analytics for business users via Sigma's spreadsheet interface and templates will be part of your responsibilities. You will need to apply row-level security and user-level filters to ensure proper data access controls. Furthermore, you will work closely with data engineering teams to validate data accuracy and ensure model alignment. Troubleshooting performance or data issues in reports and dashboards will also be a key aspect of your role. You will be expected to train and support users on Sigma best practices, tools, and data literacy. To excel in this role, you should have at least 5 years of experience in Business Intelligence, Analytics, or Data Visualization roles. Hands-on experience with Sigma Computing is highly preferred. Strong SQL skills and experience working with cloud data platforms such as Snowflake, BigQuery, or Redshift are essential. Familiarity with data modeling concepts and modern data stacks is required. Your ability to translate business requirements into technical solutions will be crucial. Knowledge of data governance, security, and role-based access controls is important. Excellent communication and stakeholder management skills are necessary for effective collaboration. Experience with tools like Looker, Tableau, Power BI, or similar ones will be beneficial for comparative insights. Familiarity with dbt, Fivetran, or other ELT/ETL tools is a plus. Exposure to Agile or Scrum methodologies would also be advantageous.,

Posted 2 weeks ago

Apply

10.0 - 14.0 years

0 Lacs

Kolkata, West Bengal

On-site

You are a highly skilled and strategic Data Architect with deep expertise in the Azure Data ecosystem. Your role will involve defining and driving the overall Azure-based data architecture strategy aligned with enterprise goals. You will architect and implement scalable data pipelines, data lakes, and data warehouses using Azure Data Lake, ADF, and Azure SQL/Synapse. Providing technical leadership on Azure Databricks for large-scale data processing and advanced analytics use cases is a crucial aspect of your responsibilities. Integrating AI/ML models into data pipelines and supporting the end-to-end ML lifecycle including training, deployment, and monitoring will be part of your day-to-day tasks. Collaboration with cross-functional teams such as data scientists, DevOps engineers, and business analysts is essential. You will evaluate and recommend tools, platforms, and design patterns for data and ML infrastructure while mentoring data engineers and junior architects on best practices and architectural standards. Your role will require a strong background in data modeling, ETL/ELT frameworks, and data warehousing concepts. Proficiency in SQL, Python, PySpark, and a solid understanding of AI/ML workflows and tools are necessary. Exposure to Azure DevOps and excellent communication and stakeholder management skills are also key requirements. As a Data Architect at Lexmark, you will play a vital role in designing and overseeing robust, scalable, and secure data architectures to support advanced analytics and machine learning workloads. If you are an innovator looking to make your mark with a global technology leader, apply now to join our team in Kolkata, India.,

Posted 2 weeks ago

Apply

12.0 - 16.0 years

0 Lacs

Karnataka

On-site

As an Experienced Senior Data Engineer at Adobe, you will utilize Big Data and Google Cloud technologies to develop large-scale, on-cloud data processing pipelines and data warehouses. Your role will involve consulting with customers worldwide on their data engineering needs around Adobe's Customer Data Platform and supporting pre-sales discussions regarding complex and large-scale cloud data engineering solutions. You will design custom solutions on cloud by integrating Adobe's solutions in a scalable and performant manner. Additionally, you will deliver complex, large-scale, enterprise-grade on-cloud data engineering and integration solutions in a hands-on manner. To be successful in this role, you should have a total of 12 to 15 years of experience, with 3 to 4 years of experience leading Data Engineer teams in developing enterprise-grade data processing pipelines on Google Cloud. You must have led at least one project of medium to high complexity involving the migration of ETL pipelines and Data warehouses to the cloud. Your recent 3 to 5 years of experience should be with premium consulting companies. Profound hands-on expertise with Google Cloud Platform services, especially BigQuery, Dataform, Dataplex, etc., is essential. Exceptional communication skills are crucial for effectively engaging with Data Engineers, Technology, and Business leadership. Furthermore, the ability to leverage knowledge of GCP to other cloud environments is highly desirable. It would be advantageous to have experience consulting with customers in India and possess multi-cloud expertise, with knowledge of AWS and GCP. At Adobe, creativity, curiosity, and continuous learning are valued qualities that contribute to your career growth journey. To pursue a new opportunity at Adobe, ensure to update your Resume/CV and Workday profile, including your unique Adobe experiences and volunteer work. Familiarize yourself with the Internal Mobility page on Inside Adobe to understand the process and set up job alerts for roles that interest you. Prepare for interviews by following the provided tips. Upon applying for a role via Workday, the Talent Team will contact you within 2 weeks. If you progress to the official interview process with the hiring team, inform your manager to support your career growth. At Adobe, you will experience an exceptional work environment recognized globally. You will collaborate with colleagues dedicated to mutual growth through the Check-In approach, where ongoing feedback is encouraged. If you seek to make an impact, Adobe is the ideal place for you. Explore employee career experiences on the Adobe Life blog and discover the meaningful benefits offered. For individuals with disabilities or special needs requiring accommodation to navigate the Adobe.com website or complete the application process, contact accommodations@adobe.com or call (408) 536-3015.,

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

karnataka

On-site

At PwC, the infrastructure team focuses on designing and implementing secure IT systems that support business operations. The primary goal is to ensure the smooth functioning of networks, servers, and data centers to enhance performance and minimize downtime. In the infrastructure engineering role at PwC, you will create robust and scalable technology infrastructure solutions for clients, working on network architecture, server management, and cloud computing.

For this Data Modeler position, we are seeking candidates with a solid background in data modeling, metadata management, and data system optimization. Your responsibilities will include analyzing business requirements, developing long-term data models, and ensuring the efficiency and consistency of our data systems.

Key areas of expertise for this role include:
- Analyzing and translating business needs into long-term data model solutions.
- Evaluating existing data systems and suggesting enhancements.
- Defining rules for translating and transforming data across various models.
- Collaborating with the development team to create conceptual data models and data flows.
- Establishing best practices for data coding to maintain consistency within the system.
- Reviewing modifications of existing systems for cross-compatibility.
- Implementing data strategies and developing physical data models.
- Updating and optimizing local and metadata models.
- Utilizing canonical data modeling techniques to improve data system efficiency.
- Evaluating implemented data systems for variances, discrepancies, and efficiency.
- Troubleshooting and optimizing data systems for optimal performance.
- Demonstrating strong expertise in relational and dimensional modeling (OLTP, OLAP).
- Using data modeling tools such as Erwin, ER/Studio, Visio, and PowerDesigner effectively.
- Proficiency in SQL and database management systems (Oracle, SQL Server, MySQL, PostgreSQL).
- Understanding of NoSQL databases (MongoDB, Cassandra) and their data structures.
- Experience with data warehouses and BI tools (Snowflake, Redshift, BigQuery, Tableau, Power BI).
- Familiarity with ETL processes, data integration, and data governance frameworks.
- Strong analytical, problem-solving, and communication skills.

Qualifications for this position include:
- A Bachelor's degree in Engineering or a related field.
- 5 to 9 years of experience in data modeling or a related field.
- 4+ years of hands-on experience with dimensional and relational data modeling.
- Expert knowledge of metadata management and related tools.
- Proficiency with data modeling tools such as Erwin, PowerDesigner, or Lucid.
- Knowledge of transactional databases and data warehouses.

Preferred Skills:
- Experience with cloud-based data solutions (AWS, Azure, GCP).
- Knowledge of big data technologies (Hadoop, Spark, Kafka).
- Understanding of graph databases and real-time data processing.
- Certifications in data management, modeling, or cloud data engineering.
- Excellent communication and presentation skills.
- Strong interpersonal skills to collaborate effectively with various teams.
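
To illustrate the dimensional-modelling work described above, here is a minimal, self-contained sketch of a star schema (one fact table and two dimensions) expressed as DDL and executed through Python's built-in sqlite3 module; the table and column names are invented for the example.

```python
import sqlite3

# In-memory database, used only to demonstrate the star-schema shape.
conn = sqlite3.connect(":memory:")

conn.executescript("""
    -- Dimension: customer (slowly changing attributes would be versioned in practice).
    CREATE TABLE dim_customer (
        customer_key INTEGER PRIMARY KEY,
        customer_id  TEXT NOT NULL,
        segment      TEXT,
        country      TEXT
    );

    -- Dimension: date, keyed by a yyyymmdd surrogate such as 20240131.
    CREATE TABLE dim_date (
        date_key   INTEGER PRIMARY KEY,
        full_date  TEXT NOT NULL,
        month      INTEGER,
        year       INTEGER
    );

    -- Fact: one row per order line, with foreign keys to the dimensions and additive measures.
    CREATE TABLE fact_sales (
        customer_key INTEGER REFERENCES dim_customer (customer_key),
        date_key     INTEGER REFERENCES dim_date (date_key),
        quantity     INTEGER,
        net_amount   REAL
    );
""")

conn.close()
```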

Posted 2 weeks ago

Apply

10.0 - 14.0 years

0 Lacs

haryana

On-site

You are a Senior BI Platform Engineer with over 10 years of experience and specialized knowledge in Tableau, Power BI, Alteryx, and MicroStrategy (MSTR). In this role, you will act as a technical lead and platform administrator for our BI platforms, ensuring consistent performance, providing advanced user support (L3), and engaging with stakeholders. You will also establish and manage CI/CD pipelines for BI assets to guarantee scalable, automated, and governed deployment processes.

As the platform administrator, you will oversee Tableau, Power BI, Alteryx, and MSTR, managing permissions, data sources, server performance, and upgrades. You will provide Level 3 (L3) support for BI platforms, handling complex technical issues, performing root cause analysis, and troubleshooting at the platform level. Designing, implementing, and maintaining CI/CD pipelines for BI dashboards, dataflows, and platform configurations to facilitate agile development and deployment will also be part of your role. Collaboration with cross-functional teams to gather requirements and ensure the proper implementation of dashboards and analytics solutions is essential, as is monitoring and optimizing BI platform performance, usage, and adoption. You will work closely with data engineering teams to ensure data quality and availability for reporting needs.

Your duties will also include creating and maintaining documentation for governance, support processes, and best practices; training and mentoring users and junior team members on BI tools and reporting standards; and acting as a liaison between business stakeholders and technical teams to ensure alignment and timely issue resolution. Furthermore, you will manage all BI upgrades, optimize the capacity of the Power BI gateway, Tableau Bridge, Alteryx Server, and other BI platforms, and enable new features in each platform. Managing licenses optimally, including automated assignments and off-boarding of users, as well as managing role-based access control (RBAC) across all BI platforms, will also fall under your purview.

Qualifications for this role include a minimum of 10 years of experience in a BI support or engineering role and advanced proficiency in Tableau, Power BI, Alteryx, and MSTR, encompassing administrative functions, troubleshooting, and user support. Demonstrated experience providing L3 support and managing CI/CD pipelines for BI platforms is vital, along with strong knowledge of BI architecture, data visualization best practices, and data modeling concepts. Excellent problem-solving and communication skills, with the ability to interact confidently with senior business leaders, are necessary, and experience with SQL, data warehouses, and cloud platforms (e.g., Azure, Snowflake) is preferred. A Bachelor's degree in Computer Science, Information Systems, or a related field is mandatory. Preferred qualifications include experience with Tableau Server/Cloud, Power BI Service, and MSTR administration; familiarity with enterprise data governance and access control policies; and certifications in Tableau, Power BI, Alteryx, or MSTR.
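
One concrete flavour of the CI/CD work described in this posting is publishing a workbook from a pipeline step. The sketch below uses the tableauserverclient Python library to publish a packaged workbook to Tableau Server; the server URL, token values, project id, and file path are placeholder assumptions, and a real pipeline would pull the credentials from secrets management.

```python
import tableauserverclient as TSC

# Placeholder credentials -- in a CI/CD pipeline these come from the secrets store, not code.
auth = TSC.PersonalAccessTokenAuth("ci-token-name", "ci-token-secret", site_id="analytics")
server = TSC.Server("https://tableau.example.com", use_server_version=True)

with server.auth.sign_in(auth):
    workbook = TSC.WorkbookItem(project_id="00000000-0000-0000-0000-000000000000")
    # Overwrite keeps the same workbook URL so downstream links and subscriptions survive.
    published = server.workbooks.publish(
        workbook, "dashboards/sales_overview.twbx", mode=TSC.Server.PublishMode.Overwrite
    )
    print(f"Published workbook id: {published.id}")
```

An equivalent step for Power BI assets would typically call the Power BI REST API from the same pipeline rather than a client library.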

Posted 2 weeks ago

Apply

5.0 - 15.0 years

0 Lacs

noida, uttar pradesh

On-site

HCLTech is seeking a Data and AI Principal / Senior Manager (Generative AI) for its Noida location. As a global technology company with a workforce of over 218,000 employees in 59 countries, HCLTech specializes in digital, engineering, cloud, and AI solutions. The company collaborates with clients across industries such as Financial Services, Manufacturing, Life Sciences, Healthcare, Technology, Telecom, Retail, and Public Services, offering innovative technology services and products. With consolidated revenues of $13.7 billion as of the 12 months ending September 2024, HCLTech aims to drive progress and transformation for its clients globally.

Key Responsibilities: In this role, you will provide hands-on technical leadership and oversight, leading the design of AI and GenAI solutions, machine learning pipelines, and data architectures. You will actively contribute to coding, solution design, and troubleshooting of critical components, collaborating with Account Teams, Client Partners, and Domain SMEs to ensure technical solutions align with business needs. You will also mentor and guide engineers across various functions to foster a collaborative and high-performance team environment.

You will design and implement system and API architectures, integrating microservices, RESTful APIs, cloud-based services, and machine learning models seamlessly into GenAI and data platforms, and you will lead the integration of AI, GenAI, and Agentic applications, NLP models, and large language models into scalable production systems. You will architect ETL pipelines, data lakes, and data warehouses using tools like Apache Spark, Airflow, and Google BigQuery, and drive deployment on cloud platforms such as AWS, Azure, and GCP.

Furthermore, you will lead the design and deployment of machine learning models using frameworks like PyTorch, TensorFlow, and scikit-learn, ensuring accurate and reliable outputs. You will develop prompt engineering techniques for GenAI models and implement best practices for ML model performance monitoring and continuous training. The role also calls for expertise in CI/CD pipelines, Infrastructure-as-Code, cloud management, stakeholder communication, agile development, performance optimization, and scalability strategies.

Required Qualifications:
- 15+ years of hands-on technical experience in software engineering, with at least 5+ years in a leadership role managing cross-functional teams in AI, GenAI, machine learning, data engineering, and cloud infrastructure.
- Proficiency in Python and experience with Flask, Django, or FastAPI for API development.
- Extensive experience building and deploying ML models using TensorFlow, PyTorch, scikit-learn, and spaCy, and integrating them into AI frameworks.
- Familiarity with ETL pipelines, data lakes, data warehouses, and data processing tools like Apache Spark, Airflow, and Kafka.
- Strong expertise in CI/CD pipelines, containerization, Infrastructure-as-Code, and API security for high-traffic systems.

If you are interested in this position, please share your profile, including overall experience, skills, current and preferred location, current and expected CTC, and notice period, with paridhnya_dhawankar@hcltech.com.
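
As a small illustration of the model-serving side of this role, the sketch below exposes a scikit-learn model behind a FastAPI endpoint; the model file, feature names, and route are assumptions made for the example rather than details from the posting.

```python
from fastapi import FastAPI
from pydantic import BaseModel
import joblib

app = FastAPI(title="churn-scoring-api")

# Hypothetical pre-trained scikit-learn classifier serialized with joblib.
model = joblib.load("models/churn_model.joblib")


class Features(BaseModel):
    tenure_months: int
    monthly_charges: float
    support_tickets: int


@app.post("/predict")
def predict(features: Features) -> dict:
    # The feature order must match the order used when the model was trained.
    row = [[features.tenure_months, features.monthly_charges, features.support_tickets]]
    probability = float(model.predict_proba(row)[0][1])
    return {"churn_probability": probability}
```

In a production setting, a service like this would typically be containerized and rolled out through the CI/CD and Infrastructure-as-Code practices the posting describes.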

Posted 2 weeks ago

Apply
Page 1 of 3

Featured Companies