6.0 - 10.0 years
0 Lacs
Maharashtra
On-site
A 100% cloud-based pharma startup in Mumbai is seeking a Senior IT Analytics Specialist to lead data-driven decision-making, AI-powered automation, and cloud-based analytics across business functions. The role draws on data from ERP, SFA, DMS, LIMS, HRMS, and Chemist Software to deliver actionable insights, predictive analytics, and AI-driven forecasting tools specific to pharma operations. The ideal candidate has practical experience with BI tools, AI/ML adoption, cloud analytics, API integrations, and data governance, and will collaborate with external vendors to ensure smooth data flow, security, and analytics-driven business intelligence.

Key Responsibilities:
- Collaborate with AI & Data Science teams to implement real-time analytics.
- Develop AI-driven forecasting tools for pharma sales, inventory, and demand planning.
- Create and support LLM-powered chatbots for various operational aspects.
- Ensure seamless data lake connectivity for advanced cloud analytics and BI tools.
- Act as the primary contact for all data analytics vendors and AI partners.
- Manage SLAs, contracts, and performance benchmarks for outsourced IT analytics services.
- Oversee system performance, data accuracy, and security updates for analytics platforms.
- Drive data visualization, define KPIs for each function, and leverage analytics as a business enabler.

Desired Candidate Profile:
- 6-8 years of experience in IT analytics, cloud BI, and AI-driven decision-making.
- Proficiency in data pipelines, ETL workflows, and API integrations for enterprise systems.
- Strong knowledge of BI tools (Power BI, Qlik, Tableau) and SQL, plus Python/R for data analytics.
- Experience in cloud-based analytics (AWS, Azure, GCP) and data governance.
- Familiarity with AI-driven insights, predictive modeling, and NLP-driven analytics tools.

Desired Certifications and Qualifications:
- Experience in pharma, healthcare, or regulated environments.
- Knowledge of data privacy laws (HIPAA, GDPR, DPDP Act India).
- Certifications in AWS, Azure, ITIL, CISSP, or AI/ML technologies.
- Bachelor of Engineering / Bachelor of Technology (B.E./B.Tech.) in IT/CS/E&CE, or Bachelor of Computer Applications (B.C.A.).

This position offers the opportunity to be part of a 100% cloud-based pharma setup within a well-established Indian conglomerate.
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
Pune, Maharashtra
On-site
The Data Engineer role involves building data engineering solutions using cutting-edge data techniques. You will collaborate with product owners, customers, and technologists to deliver data products and solutions in an agile environment. Responsibilities include designing and developing big data solutions in partnership with domain experts, product managers, analysts, and data scientists. You will work with PySpark and Python, build client data pipelines from various sources, ensure automation through CI/CD, and define needs for data platform maintainability, testability, performance, security, quality, and usability. Additionally, you will drive the implementation of consistent patterns, reusable components, and coding standards for data engineering processes. Converting Talend pipelines into PySpark and Python, tuning big data applications for optimal performance, evaluating new IT developments, and recommending system enhancements are also part of the role. You should have 4-8 years of IT experience with at least 4 years in PySpark and Python, along with experience in designing and developing data pipelines, Spark programming, machine learning libraries, containerization technologies, DevOps, and team management. Knowledge of Oracle performance tuning, SQL, Autosys, and Unix scripting is also beneficial. The ideal candidate holds a Bachelor's degree or equivalent experience. Please note that this job description provides a summary of the work performed, and additional job-related duties may be assigned as needed.
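As an illustration (not part of the posting itself), here is a minimal PySpark sketch of the kind of batch job a Talend-to-PySpark conversion might produce; the paths, schema, and column names are placeholders.

```python
# Minimal PySpark ETL sketch: read raw CSV, clean, aggregate, write Parquet.
# Paths and column names are illustrative placeholders, not a real pipeline.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("talend_to_pyspark_example").getOrCreate()

raw = spark.read.option("header", True).csv("/data/raw/orders/")

cleaned = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .filter(F.col("amount").cast("double").isNotNull())
)

daily = (
    cleaned.groupBy(F.to_date("order_ts").alias("order_date"))
           .agg(F.sum(F.col("amount").cast("double")).alias("total_amount"),
                F.count("*").alias("order_count"))
)

daily.write.mode("overwrite").parquet("/data/curated/daily_orders/")
```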
Posted 1 week ago
12.0 - 16.0 years
0 Lacs
Karnataka
On-site
Join the Agentforce team in AI Cloud at Salesforce and make a real impact with your software designs and code! This position requires technical skills, outstanding analytical and influencing skills, and extraordinary business insight. It is a multi-functional role that requires building alignment and communication with several engineering organisations. We work in a highly collaborative environment, and you will partner with a cross-functional team of data scientists, software engineers, machine learning engineers, UX experts, and product managers to build upon Agentforce, our cutting-edge new AI framework. We value execution, clear communication, feedback, and making learning fun.

Your impact - you will:
- Architect, design, implement, test, and deliver highly scalable AI solutions: agents, AI copilots/assistants, chatbots, AI planners, and RAG solutions.
- Be accountable for defining and driving software architecture and enterprise capabilities (scalability, fault tolerance, extensibility, maintainability, etc.).
- Independently design sophisticated software systems for high-end solutions, while working in a consultative fashion with other senior engineers and architects in AI Cloud and across the company.
- Determine overall architectural principles, frameworks, and standards to craft vision and roadmaps.
- Analyze and provide feedback on product strategy and technical feasibility.
- Drive long-term design strategies that span multiple sophisticated projects, and deliver technical reports and performance presentations to customers and at industry events.
- Actively communicate with, encourage, and motivate all levels of staff.
- Be a subject matter expert for multiple products, while writing code and working closely with other developers, PM, and UX to ensure features are delivered to meet business and quality requirements.
- Troubleshoot complex production issues and interface with support and customers as needed.

Required Skills:
- 12+ years of experience in building highly scalable Software-as-a-Service applications/platforms.
- Experience building technical architectures that address complex performance issues.
- Ability to thrive in dynamic environments, working on cutting-edge projects that often come with ambiguity; an innovation/startup mindset and the ability to adapt.
- Deep knowledge of object-oriented programming and experience with at least one object-oriented programming language, preferably Java.
- Proven ability to mentor team members to support their understanding and growth of software engineering architecture concepts and aid in their technical development.
- High proficiency in at least one high-level programming language and web framework (NodeJS, Express, Hapi, etc.).
- Proven understanding of web technologies such as JavaScript, CSS, HTML5, XML, JSON, and/or Ajax.
- Data model design, database technologies (RDBMS & NoSQL), and languages such as SQL and PL/SQL.
- Experience delivering, or partnering with teams that ship, AI products at high scale.
- Experience in automated testing, including unit and functional testing using Java, JUnit, JSUnit, and Selenium.
- Demonstrated ability to drive long-term design strategies that span multiple complex projects.
- Experience delivering technical reports and presentations to customers and at industry events.
- Demonstrated track record of cultivating strong working relationships and driving collaboration across multiple technical and business teams to resolve critical issues.
- Experience with the full software lifecycle in highly agile and ambiguous environments.
- Excellent interpersonal and communication skills.

Preferred Skills:
- Solid experience in API development, API lifecycle management, and/or client SDK development.
- Experience with machine learning or cloud technology platforms such as AWS SageMaker, Terraform, Spinnaker, EKS, and GKE.
- Experience with AI/ML and data science, including predictive and generative AI.
- Experience with data engineering, data pipelines, or distributed systems.
- Experience with continuous integration (CI), continuous deployment (CD), and service ownership.
- Familiarity with Salesforce APIs and technologies.
- Ability to support and resolve production customer escalations with excellent debugging and problem-solving skills.
Posted 1 week ago
2.0 - 7.0 years
5 - 11 Lacs
Pune
Work from Office
Role: Generative AI Engineer (Contract)
Experience: 2+ years in AI/ML development
Notice period: immediate joiner / up to 30 days
Location: Pune (preference for candidates from the Pune location)

Key Responsibilities:
- Design and deploy Generative AI models (LLMs, diffusion models) on Google Cloud Platform (GCP) using Vertex AI, BigQuery, and Cloud Storage.
- Develop end-to-end AI solutions including data pipelines, APIs, and microservices.
- Work on prompt engineering to optimize AI model performance.
- Analyze datasets to uncover insights and optimize business outcomes.
- Collaborate with cross-functional teams for seamless AI integration.

Must-Have Skills:
- Strong experience in Generative AI, LLMs, and diffusion models.
- Hands-on with Google Cloud Platform (GCP), especially Vertex AI.
- Proficiency in Python, TensorFlow/PyTorch, and machine learning/deep learning.
- Experience in Power BI for data visualization.
- Excellent problem-solving and collaboration skills.

Good to Have: MLOps, Docker/Kubernetes, and chatbot development experience.
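For context, a hedged sketch of a prompt-engineering call against a Vertex AI model, assuming the google-cloud-aiplatform Python SDK; the project, region, model name, and prompt are placeholders, not a production setup.

```python
# Hypothetical prompt-engineering call against a Vertex AI LLM.
# Project, region, and model name are placeholders; requires the
# google-cloud-aiplatform package and GCP credentials.
import vertexai
from vertexai.generative_models import GenerativeModel

vertexai.init(project="my-gcp-project", location="us-central1")

model = GenerativeModel("gemini-1.5-pro")

prompt = (
    "You are a support assistant for a retail catalog. "
    "Summarize the following product review in two sentences:\n\n"
    "{review_text}"
)

response = model.generate_content(
    prompt.format(review_text="Great battery life, weak camera.")
)
print(response.text)
```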
Posted 1 week ago
5.0 - 7.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Ready to shape the future of work? At Genpact, we don't just adapt to change - we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's industry-first accelerator is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models onward, our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment.

Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today.

Inviting applications for the role of Lead Consultant - MLOps Engineer! In this role, you will define, implement, and oversee the MLOps strategy for scalable, compliant, and cost-efficient deployment of AI/GenAI models across the enterprise. This role combines deep DevOps knowledge, infrastructure architecture, and AI platform design to guide how teams build and ship ML models securely and reliably. You will establish governance, reuse, and automation frameworks for AI infrastructure, including Terraform-first cloud automation, multi-environment CI/CD, and observability pipelines.

Responsibilities:
- Architect secure, reusable, modular IaC frameworks across clouds and regions for MLOps.
- Lead the development of CI/CD pipelines and standardize deployment frameworks.
- Design observability and monitoring systems for ML/GenAI workloads.
- Collaborate with platform, data science, compliance, and Enterprise Architecture teams to ensure scalable ML operations.
- Define enterprise-wide MLOps architecture and standards (build → deploy → monitor).
- Lead the design of the GenAI/LLMOps platform (Bedrock/OpenAI/Hugging Face + RAG stack).
- Integrate governance controls (approvals, drift detection, rollback strategies).
- Define model metadata standards, monitoring SLAs, and re-training workflows.
- Influence tooling, hiring, and roadmap decisions for AI/ML delivery.
- Engage in the design, development, and maintenance of data pipelines for various AI use cases.
- Actively contribute to key deliverables as part of an agile development team.

Qualifications we seek in you!

Minimum Qualifications:
- Relevant years of experience in DevOps or MLOps roles.
- Degree/qualification in Computer Science or a related field, or equivalent work experience.
- Strong Python programming skills.
- Hands-on experience in containerised deployment.
- Proficient with AWS (SageMaker, Lambda, ECR), Terraform, and Python.
- Demonstrated experience deploying multiple GenAI systems into production, including hands-on deployment of 3-4 ML/GenAI models in AWS.
- Deep understanding of the ML model lifecycle: train → test → deploy → monitor → retrain.
- Experience in developing, testing, and deploying data pipelines using the public cloud.
- Clear and effective communication skills to interact with team members, stakeholders, and end users.
- Knowledge of governance and compliance policies, standards, and procedures.
- Exposure to RAG/LLM workloads and model deployment infrastructure.

Preferred Qualifications/Skills:
- Experience designing model governance frameworks and CI/CD pipelines.
- Advanced understanding of platform security, cost optimization, and ML observability.

Why join Genpact?
- Be a transformation leader - work at the cutting edge of AI, automation, and digital innovation.
- Make an impact - drive change for global enterprises and solve business challenges that matter.
- Accelerate your career - get hands-on experience, mentorship, and continuous learning opportunities.
- Work with the best - join 140,000+ bold thinkers and problem-solvers who push boundaries every day.
- Thrive in a values-driven culture - our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress.

Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: up. Let's build tomorrow together.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
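As a hedged illustration of the deploy stage in the train → test → deploy → monitor → retrain loop described above, here is a boto3 SageMaker sketch; the model name, container image, S3 path, and IAM role are placeholders rather than a production configuration.

```python
# Sketch of the deploy step of an ML lifecycle using boto3's SageMaker
# client. All names, the image URI, and the S3 path are placeholders.
import boto3

sm = boto3.client("sagemaker", region_name="us-east-1")

# Register the trained model artifact and its serving container.
sm.create_model(
    ModelName="churn-model-v3",
    PrimaryContainer={
        "Image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/churn:latest",
        "ModelDataUrl": "s3://my-bucket/models/churn-v3/model.tar.gz",
    },
    ExecutionRoleArn="arn:aws:iam::123456789012:role/SageMakerExecutionRole",
)

# Describe how the model should be hosted.
sm.create_endpoint_config(
    EndpointConfigName="churn-endpoint-config-v3",
    ProductionVariants=[{
        "VariantName": "AllTraffic",
        "ModelName": "churn-model-v3",
        "InstanceType": "ml.m5.large",
        "InitialInstanceCount": 1,
    }],
)

# Create (or roll forward) the real-time inference endpoint.
sm.create_endpoint(
    EndpointName="churn-endpoint",
    EndpointConfigName="churn-endpoint-config-v3",
)
```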
Posted 1 week ago
2.0 - 6.0 years
0 Lacs
Ahmedabad, Gujarat
On-site
As a Software Test Engineer at Trading Technologies, you will play a crucial role in ensuring the quality and functionality of our cutting-edge trading applications. Your main responsibilities will include designing, developing, and executing test plans and test cases based on software requirements and technical design specifications. You will collaborate with the Development team to investigate and debug software issues, recommend product improvements to the Product Management team, and constantly enhance your skills alongside a team of testers. Your expertise in testing multi-asset trade analytics applications, automated testing using Python or similar programming languages, and experience with cloud-based systems like AWS will be invaluable in this role. Knowledge of trade analytics standards such as pre- and post-Trade TCA, SEC & FINRA rule compliance, MiFID II, and PRIIPs analytics will be highly advantageous. Additionally, your understanding of performance and load testing for SQL queries and data pipelines will be essential. At Trading Technologies, we offer a competitive benefits package to support your well-being and growth. You will have access to medical, dental, and vision coverage, generous paid time off, parental leave, professional development opportunities, and wellness perks. Our hybrid work model allows for a balance between in-office collaboration and remote work, fostering team cohesion, innovation, and mentorship opportunities. Join our forward-thinking and inclusive culture that values diversity and promotes collaborative teamwork. Trading Technologies is a leading Software-as-a-Service (SaaS) technology platform provider in the global capital markets industry. Our TT platform connects to major international exchanges and liquidity venues, offering advanced tools for trade execution, order management, market data solutions, risk management, and more to a diverse client base. Join us in shaping the future of trading technology and delivering innovative solutions to market participants worldwide.,
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
Karnataka
On-site
As a Python API Developer specializing in Product Development, you will leverage your 4+ years of experience to design, develop, and maintain high-performance, scalable APIs that drive our Generative AI products. Your role will involve close collaboration with data scientists, machine learning engineers, and product teams to seamlessly integrate Generative AI models (e.g., GPT, GANs, DALL-E) into production-ready applications. Your expertise in backend development, Python programming, and API design will be crucial in ensuring the successful deployment and execution of AI-driven features. You should hold a Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field. Your professional experience should demonstrate hands-on involvement in designing and developing APIs, particularly with Generative AI models or machine learning models in a production environment. Proficiency in cloud-based infrastructures (AWS, Google Cloud, Azure) and services for deploying backend systems and AI models is essential. Additionally, you should have a strong background in working with backend frameworks and languages like Python, Django, Flask, or FastAPI. Your core technical skills include expertise in Python for backend development using frameworks such as Flask, Django, or FastAPI. You should possess a strong understanding of building and consuming RESTful APIs or GraphQL APIs, along with experience in designing and implementing API architectures. Familiarity with database management systems (SQL/NoSQL) like PostgreSQL, MySQL, MongoDB, Redis, and knowledge of cloud infrastructure (e.g., AWS, Google Cloud, Azure) are required. Experience with CI/CD pipelines, version control tools like Git, and Agile development methodologies is crucial for automating deployments and ensuring efficient backend operations. Key responsibilities will involve closely collaborating with AI/ML engineers to integrate Generative AI models into backend services, handling data pipelines for real-time or batch processing, and engaging in design discussions to ensure technical feasibility and scalability of features. Implementing caching mechanisms, rate-limiting, and queueing systems to manage AI-related API requests, as well as ensuring backend services can handle high concurrency during resource-intensive generative AI processes, will be essential. Your problem-solving skills, excellent communication abilities for interacting with cross-functional teams, and adaptability to stay updated on the latest technologies and trends in generative AI will be critical for success in this role.,
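To make the caching and concurrency-limiting responsibilities above concrete, here is a minimal FastAPI sketch; it is an assumption-laden illustration, not the employer's codebase, and the generate() coroutine stands in for a real model client.

```python
# Minimal FastAPI sketch of an API fronting a generative model, with a
# naive in-memory cache and a concurrency cap. generate() is a stand-in
# for a real model inference call.
import asyncio
import hashlib

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()
_semaphore = asyncio.Semaphore(4)   # cap concurrent generation requests
_cache: dict[str, str] = {}         # prompt-hash -> cached completion

class GenerateRequest(BaseModel):
    prompt: str

async def generate(prompt: str) -> str:
    await asyncio.sleep(0.1)        # placeholder for a model inference call
    return f"echo: {prompt}"

@app.post("/generate")
async def generate_endpoint(req: GenerateRequest):
    key = hashlib.sha256(req.prompt.encode()).hexdigest()
    if key in _cache:
        return {"completion": _cache[key], "cached": True}
    if _semaphore.locked():
        # Shed load instead of queueing unbounded generative requests.
        raise HTTPException(status_code=429, detail="Too many concurrent requests")
    async with _semaphore:
        completion = await generate(req.prompt)
    _cache[key] = completion
    return {"completion": completion, "cached": False}
```

In a real deployment the dictionary cache would typically be replaced by Redis or a similar shared store so that rate limits and cached completions survive across workers.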
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
Join us as an Assistant Vice President - Credit Strategy at Barclays, where you will spearhead the development of credit strategies for some well-known co-brand credit cards, using industry-leading tools to enhance those strategies. You may be assessed on key skills relevant for success in the role, such as experience with SAS/SQL, knowledge of lending products (especially credit cards), and understanding of the credit underwriting process, as well as job-specific skillsets.

To be successful as an Assistant Vice President - Credit Strategy, you should have experience with:

Basic/Essential Qualifications:
- College degree required; quantitative disciplines preferred; master's degree preferred
- Analytical experience in the financial services industry; credit card experience preferred
- Experience building underwriting and/or line assignment credit strategies
- Experience with SAS/SQL or other relevant statistical tools

Desirable skillsets/good to have:
- Strong analytical, technical, and statistical skills with a proven ability to process vast amounts of data into meaningful information
- Strong knowledge of SQL, SAS, and Excel
- Strong communication skills, with the ability to present information clearly, in both written and verbal form
- Previous experience in the financial services industry and credit risk management preferred
- Ability to thrive in a dynamic and fast-paced environment

Key Accountabilities:
- Utilizing segment-level valuations, develop credit strategies (underwriting, line, and pricing) for the US Partner Portfolio (50%)
- Evaluate alternative data for underwriting and line assignment strategies using statistical analysis techniques such as CHAID decision trees, optimization procedures, loss forecasting, enhanced process monitoring, and data quality analyses, and incorporate score implementations supporting launch (20%)
- Develop and monitor a customized manual underwriting process (10%)
- Work with Segment / Strategic Analytics / Decision Science staff to ensure project completion within agreed time frames and end-client satisfaction (10%)
- Analyze, validate, track, and monitor delivered projects (10%)

Stakeholder Management and Leadership: The person in this role will interact with the Customer Office, Credit Risk, Finance, Segment teams, Customer Delivery, and Technology teams to ensure the correct implementation of the targeting strategies.

Decision-making and Problem Solving: Strong problem-solving skills are required. The person needs to be resourceful, with a can-do attitude to independently push tasks forward while working with key stakeholders, and able to make decisions on the fly when equipped with the right background information and business insights.

Risk and Control Objective: Ensure that all activities and duties are carried out in full compliance with regulatory requirements, the Enterprise Wide Risk Management Framework, and internal Barclays Policies and Policy Standards.

This role will be based out of Noida, India.

Purpose of the role: To use innovative data analytics and machine learning techniques to extract valuable insights from the bank's data reserves, leveraging these insights to inform strategic decision-making, improve operational efficiency, and drive innovation across the organization.

Accountabilities:
- Identification, collection, and extraction of data from various sources, both internal and external
- Performing data cleaning, wrangling, and transformation to ensure its quality and suitability for analysis
- Development and maintenance of efficient data pipelines for automated data acquisition and processing
- Design and conduct of statistical and machine learning models to analyze patterns, trends, and relationships in the data
- Development and implementation of predictive models to forecast future outcomes and identify potential risks and opportunities
- Collaboration with business stakeholders to seek out opportunities to add value from data through data science

Assistant Vice President Expectations: To advise and influence decision-making, contribute to policy development, and take responsibility for operational effectiveness. Collaborate closely with other functions/business divisions. Lead a team performing complex tasks, using well-developed professional knowledge and skills to deliver work that impacts the whole business function. Set objectives and coach employees in pursuit of those objectives, including appraisal of performance relative to objectives and determination of reward outcomes.

All colleagues will be expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence, and Stewardship - our moral compass, helping us do what we believe is right. They will also be expected to demonstrate the Barclays Mindset - to Empower, Challenge, and Drive - the operating manual for how we behave.
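For readers unfamiliar with the segmentation step mentioned above, a hedged sketch follows; scikit-learn has no CHAID implementation, so a CART decision tree stands in for it here, and the features and data are entirely synthetic placeholders.

```python
# Illustrative credit-strategy segmentation with a CART decision tree
# (scikit-learn has no CHAID, so CART stands in). Features and the
# target are synthetic placeholders, not real portfolio data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(42)
n = 5_000
X = np.column_stack([
    rng.integers(300, 850, n),      # synthetic credit bureau score
    rng.uniform(0.0, 1.0, n),       # synthetic utilization ratio
    rng.integers(0, 10, n),         # synthetic delinquency count
])
# Synthetic "bad" outcome loosely tied to the features.
y = ((X[:, 0] < 600) & (X[:, 1] > 0.6)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=200)
tree.fit(X_train, y_train)

# The printed tree reads like an underwriting segmentation rule set.
print(export_text(tree, feature_names=["score", "utilization", "delinquencies"]))
print("holdout accuracy:", tree.score(X_test, y_test))
```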
Posted 1 week ago
8.0 - 12.0 years
0 Lacs
Haryana
On-site
At GoDaddy, the future of work varies for each team. Some work in the office full-time, some have a hybrid arrangement, while others work entirely remotely. As a Director of AI/ML, you will architect and drive GoDaddy's three-year strategic roadmap for AI and Machine Learning solutions, focusing on new customer acquisition through groundbreaking Agentic AI and sophisticated ML techniques. In this hybrid position, you will split your time between remote work from home and office work, living within commuting distance. Hybrid teams may work in-office a few times a week or as little as once a month, as determined by leadership. You will be responsible for developing a strategic vision and roadmap for AI/ML applications, showcasing visionary leadership in Agentic AI, ensuring end-to-end system integration, architecting scalable global solutions, fostering multi-functional collaboration, staying updated on AI/ML advancements, and mentoring high-performing teams. Your experience should include 8+ years of leadership in AI/ML roles, expertise in Agentic AI and autonomous systems, experience in designing and scaling global AI/ML systems, proficiency in industry-standard ML tools and frameworks, exceptional leadership and communication skills, and the ability to think strategically in a fast-paced environment. An advanced degree in Computer Science, Machine Learning, or Artificial Intelligence would be advantageous. Join us in our mission to deploy AI/ML at enterprise scale, create exceptional value for customers, and maintain GoDaddy's competitive edge in the industry.
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Navi Mumbai, Maharashtra
On-site
As a Product Owner at PitchBook, you will collaborate with key stakeholders and teams to deliver the department's product roadmap. Your role will involve aligning engineering activities with product objectives for new product capabilities, data, and scaling improvements, focusing on AI/ML data extraction, collection, and enrichment capabilities. Working within the Data Technology team, you will develop solutions to support and accelerate data operations processes, impacting core workflows of data capture, ingestion, and hygiene across private and public capital markets datasets. You will work on AI/ML Collections Data Extraction & Enrichment teams, closely integrated with Engineering and Product Management to ensure alignment with the Product Roadmap. Your responsibilities will include being a domain expert for your product area(s), defining backlog priorities, managing feature delivery according to the Product Roadmap, validating requirements, creating user stories and acceptance criteria, communicating with stakeholders, defining metrics for team performance, managing risks or blockers, and supporting AI/ML collections work. To be successful in this role, you should have a Bachelor's degree in Information Systems, Engineering, Data Science, Business Administration, or a related field, along with 3+ years of experience as a Product Manager or Product Owner within AI/ML or enterprise SaaS domains. You should have a proven track record of shipping high-impact data pipeline or data collection-related tools and services, familiarity with AI/ML workflows, experience collaborating with globally distributed teams, excellent communication skills, a bias for action, and attention to detail. Preferred qualifications include direct experience with applied AI/ML Engineering services, a background in fintech, experience with data quality measurements and ML model evaluation, exposure to cloud-based ML infrastructure and data pipeline orchestration tools, and certifications related to Agile Product Ownership / Product Management. This position offers a standard office setting with the use of a PC and phone throughout the day, collaboration with stakeholders in Seattle and New York, and limited corporate travel may be required. Morningstar's hybrid work environment allows for remote work and in-person collaboration, with benefits to enhance flexibility. Join us at PitchBook to engage meaningfully with global colleagues and contribute to our values and vision.,
Posted 1 week ago
2.0 - 7.0 years
5 - 8 Lacs
Noida
Work from Office
Develop and support ETL pipelines using Snowflake, ADF, Databricks, Python. Manage data quality, model design, Kafka/Airflow orchestration, and troubleshoot production issues in Agile teams. Required Candidate profile 1.5–3 yrs in ETL, Snowflake, ADF, Python, SQL. Knowledge of Databricks, Airflow, Kafka preferred. Bachelor's in CS/IT. Experience in data governance and cloud platforms is a plus.
Posted 1 week ago
0.0 - 3.0 years
0 Lacs
Maharashtra
On-site
We are seeking a passionate and motivated Junior Python Data Scientist to join our expanding data team. This position offers an excellent opportunity for recent graduates or individuals with up to one year of experience who are enthusiastic about applying their Python and data analysis skills to real-world challenges. As a Junior Python Data Scientist, you will be involved in meaningful projects focused on data preparation, analysis, and the development of machine learning models, all under the guidance of experienced data scientists. Your responsibilities will include cleaning, transforming, and analyzing data utilizing Python, pandas, and NumPy. You will also play a key role in supporting the development and evaluation of machine learning models using tools such as scikit-learn and TensorFlow. Additionally, you will conduct statistical analyses to extract valuable insights and identify trends, as well as contribute to building data pipelines and automating data processes. Communication of findings and presentation of insights in a clear and concise manner will be an essential part of your role. Collaboration with cross-functional teams, including data engineers, product managers, and software developers, is also a key aspect of this position. The ideal candidate for this role should have at least one year of hands-on experience with Python for data analysis or machine learning. Familiarity with pandas, NumPy, scikit-learn, and TensorFlow is required. A solid understanding of core statistical concepts and basic exploratory data analysis (EDA) is essential. Knowledge of machine learning models such as linear regression, decision trees, or classification algorithms is preferred, and exposure to advanced ML algorithms and Deep Learning is a plus. Strong problem-solving and analytical thinking skills, along with good communication and documentation abilities, are also important qualities we are looking for. A Bachelor's degree in Computer Science, Data Science, Statistics, Mathematics, or a related field is required, or currently pursuing one. Preferred qualifications include completed coursework, certifications, or personal projects related to ML or Data Science. Exposure to version control (e.g., Git), Jupyter notebooks, or cloud environments is advantageous. We are looking for candidates who are enthusiastic about learning and growing in the data science field.,
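As a concrete (and entirely illustrative) example of the clean-analyze-model loop this role describes, here is a beginner-level pandas and scikit-learn sketch; the CSV file and column names are placeholders.

```python
# Beginner-level sketch of the clean -> explore -> model loop using
# pandas and scikit-learn. File and column names are placeholders.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("customers.csv")            # illustrative file name

# Basic cleaning and exploratory data analysis.
df = df.dropna(subset=["age", "income", "churned"])
print(df[["age", "income"]].describe())
print(df["churned"].value_counts(normalize=True))

X = df[["age", "income"]]
y = df["churned"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```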
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
Haryana
On-site
The position offers you the opportunity to choose your preferred working location from Pune, Maharashtra, India; Gurugram, Haryana, India; Bengaluru, Karnataka, India; Hyderabad, Telangana, India. As a candidate, you should possess a Bachelor's degree in Computer Science or a related technical field, or equivalent practical experience along with 3 years of experience in building data and Artificial Intelligence (AI) solutions and collaborating with technical customers. Additionally, you should have experience in developing cloud enterprise solutions and supporting customer projects till completion. It would be advantageous if you have experience working with Large Language Models, data pipelines, and various data analytics and visualization techniques. Proficiency in Data Extract, Transform, and Load (ETL) techniques is desirable. Knowledge and experience in Large Language Models (LLMs) to deploy multimodal solutions involving Text, Image, Video, and Voice will be beneficial. Familiarity with data warehousing concepts, including technical architectures, infrastructure components, and investigative tools like Apache Beam, Hadoop, Spark, Pig, Hive, MapReduce, Flume, is preferred. Understanding of cloud computing, virtualization, multi-tenant cloud infrastructures, storage systems, and content delivery networks will be an added advantage. Strong communication skills are essential for this role. As part of the Google Cloud Consulting Professional Services team, you will assist customers in navigating crucial moments in their cloud journey to drive business growth. Working in a dynamic environment, you will contribute to shaping the future of businesses by leveraging Google's global network, data centers, and software infrastructure. Your responsibilities will include designing and implementing solutions for customer use cases using core Google products, identifying transformation opportunities with Generative AI (GenAI), and conducting workshops to educate customers on the potential of Google Cloud. You will have access to Google's technology to monitor application performance, troubleshoot issues, and address customer needs, ensuring a quality experience with the Google Cloud Generative AI (GenAI) and Agentic AI suite of products. Key responsibilities will involve delivering big data and GenAI solutions, acting as a trusted technical advisor to customers, identifying product features and gaps, collaborating with Product Managers and Engineers to influence the Google Cloud Platform roadmap, and providing best practices recommendations through tutorials, blog articles, and technical presentations tailored to different levels of business and technical stakeholders.,
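As a hedged illustration of the ETL tooling named above, here is a minimal Apache Beam batch pipeline; the bucket paths and record format are placeholders, and it runs on the local runner by default.

```python
# Minimal Apache Beam batch ETL sketch: read CSV lines, aggregate an
# amount per user, write the totals back out. Paths are placeholders.
import apache_beam as beam

def parse_line(line: str):
    user_id, amount = line.split(",")
    return user_id, float(amount)

with beam.Pipeline() as pipeline:
    (
        pipeline
        | "Read" >> beam.io.ReadFromText(
            "gs://my-bucket/raw/transactions.csv", skip_header_lines=1)
        | "Parse" >> beam.Map(parse_line)
        | "SumPerUser" >> beam.CombinePerKey(sum)
        | "Format" >> beam.MapTuple(lambda uid, total: f"{uid},{total}")
        | "Write" >> beam.io.WriteToText("gs://my-bucket/curated/totals")
    )
```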
Posted 2 weeks ago
10.0 - 14.0 years
0 Lacs
Kolkata, West Bengal
On-site
You are a highly skilled and strategic Data Architect with deep expertise in the Azure Data ecosystem. Your role will involve defining and driving the overall Azure-based data architecture strategy aligned with enterprise goals. You will architect and implement scalable data pipelines, data lakes, and data warehouses using Azure Data Lake, ADF, and Azure SQL/Synapse. Providing technical leadership on Azure Databricks for large-scale data processing and advanced analytics use cases is a crucial aspect of your responsibilities. Integrating AI/ML models into data pipelines and supporting the end-to-end ML lifecycle including training, deployment, and monitoring will be part of your day-to-day tasks. Collaboration with cross-functional teams such as data scientists, DevOps engineers, and business analysts is essential. You will evaluate and recommend tools, platforms, and design patterns for data and ML infrastructure while mentoring data engineers and junior architects on best practices and architectural standards. Your role will require a strong background in data modeling, ETL/ELT frameworks, and data warehousing concepts. Proficiency in SQL, Python, PySpark, and a solid understanding of AI/ML workflows and tools are necessary. Exposure to Azure DevOps and excellent communication and stakeholder management skills are also key requirements. As a Data Architect at Lexmark, you will play a vital role in designing and overseeing robust, scalable, and secure data architectures to support advanced analytics and machine learning workloads. If you are an innovator looking to make your mark with a global technology leader, apply now to join our team in Kolkata, India.,
Posted 2 weeks ago
14.0 - 18.0 years
0 Lacs
Pune, Maharashtra
On-site
We are hiring for the position of AVP - Databricks, requiring a minimum of 14 years of experience. The role is based in Bangalore/Hyderabad/NCR/Kolkata/Mumbai/Pune. As an AVP - Databricks, your responsibilities will include leading and managing Databricks-based project delivery to ensure solutions are designed, developed, and implemented according to client requirements and industry standards. You will act as the subject matter expert on Databricks, providing guidance on architecture, implementation, and optimization to teams. Collaboration with architects and engineers to design optimal solutions for data processing, analytics, and machine learning workloads is also a key aspect of the role. You will serve as the primary point of contact for clients to ensure alignment between business requirements and technical delivery. The qualifications we seek in you include a Bachelor's degree in Computer Science, Engineering, or a related field (Master's or MBA preferred). You should have relevant years of experience in IT services with a specific focus on Databricks and cloud-based data engineering. Preferred qualifications/skills include proven experience in leading end-to-end delivery, solutioning, and architecture of data engineering or analytics solutions on Databricks. Strong experience in cloud technologies (AWS, Azure, GCP), data pipelines, and big data tools is desirable. Hands-on experience with Databricks, Spark, Delta Lake, MLflow, and related technologies is a plus. Expertise in data engineering concepts, including ETL, data lakes, data warehousing, and distributed computing, will be beneficial for this role.
Posted 2 weeks ago
2.0 - 6.0 years
0 Lacs
Karnataka
On-site
You are a Software Engineer (Data SaaS Product) who will be responsible for building robust and scalable systems, with the flexibility to work on UI or scraping tasks when necessary. You should have at least 2 years of experience in either product development or data engineering. ReactJS or UI experience and the ability to build data pipelines from scratch are advantageous but not mandatory. Your willingness to learn and contribute where needed is valued more than having expertise in every area. In this role, you will own and enhance the full stack of data pipelines, from ingestion to processing to delivery. You will design, develop, and manage scalable data infrastructure in a cloud environment, writing efficient, production-ready code primarily in Python or a similar scripting language. Collaborating closely with both engineers and business teams, you will translate real-world issues into data-driven solutions and participate in ReactJS/UI feature development and bug fixes. Additionally, there may be opportunities to build and maintain scraping systems for custom data needs. The ideal candidate will have a minimum of 2 years of experience in building and scaling data-intensive products, a strong command of SQL for optimizing complex queries, proficiency in at least one scripting language (e.g., Python, Go), familiarity with UI development using ReactJS, experience with cloud platforms (AWS, GCP, etc.), and a track record of enhancing or maintaining data pipelines end-to-end. Knowledge of scraping tools and techniques is a plus. You should demonstrate a proactive mindset, taking ownership and driving initiatives forward, possess strong communication skills for effective idea conveyance and cross-functional collaboration, and be growth-oriented and comfortable in a dynamic and challenging environment. By joining this team, you will work with a motivated group addressing real challenges in the Airbnb investment sector. You can expect high autonomy, direct impact, and close collaboration with the founder, as well as the opportunity to influence key systems and architecture. Equity may be offered after the initial 2-month review based on performance. The interview process includes an initial screening with the founder, a 30-minute technical interview via video call, and a final on-site technical interview with the tech lead at the Bangalore office.,
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
Pune, Maharashtra
On-site
As a Lead Data Engineer specializing in Databricks, you will play a crucial role in designing, developing, and optimizing our next-generation data platform. Your responsibilities will include leading a team of data engineers, offering technical guidance, mentorship, and ensuring the scalability and high performance of data solutions. You will be expected to lead the design, development, and implementation of scalable and reliable data pipelines using Databricks, Spark, and other relevant technologies. It will also be part of your role to define and enforce data engineering best practices, coding standards, and architectural patterns. Additionally, providing technical guidance and mentorship to junior and mid-level data engineers, conducting code reviews, and ensuring the quality, performance, and maintainability of data solutions will be key aspects of your job. Your expertise in Databricks will be essential as you architect and implement data solutions on the Databricks platform, including Databricks Lakehouse, Delta Lake, and Unity Catalog. Optimizing Spark workloads for performance and cost efficiency on Databricks, developing and managing Databricks notebooks, jobs, and workflows, and proficiently using Databricks features such as Delta Live Tables (DLT), Photon, and SQL Analytics will be part of your daily tasks. In terms of pipeline development and operations, you will need to develop, test, and deploy robust ETL/ELT pipelines for data ingestion, transformation, and loading from various sources like relational databases, APIs, and streaming data. Implementing monitoring, alerting, and logging for data pipelines to ensure operational excellence, as well as troubleshooting and resolving complex data-related issues, will also fall under your responsibilities. Collaboration and communication are crucial aspects of this role as you will work closely with cross-functional teams, including product managers, data scientists, and software engineers. Clear communication of complex technical concepts to both technical and non-technical stakeholders is vital. Staying updated with industry trends and emerging technologies in data engineering and Databricks will also be expected. Key Skills required for this role include extensive hands-on experience with the Databricks platform, including Databricks Workspace, Spark on Databricks, Delta Lake, and Unity Catalog. Strong proficiency in optimizing Spark jobs, understanding Spark architecture, experience with Databricks features like Delta Live Tables (DLT), Photon, and Databricks SQL Analytics, and a deep understanding of data warehousing concepts, dimensional modeling, and data lake architectures are essential for success in this position.,
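A minimal sketch of a Delta Lake ingestion step of the kind this role describes, assuming a Databricks/Spark environment with Delta Lake available; the paths and column names are illustrative.

```python
# Minimal Delta Lake ingestion sketch: read raw JSON events, dedupe,
# partition by date, append to a Delta table. Paths are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("delta_ingest_example").getOrCreate()

events = (
    spark.read.json("/mnt/raw/events/")
         .withColumn("event_date", F.to_date("event_ts"))
         .dropDuplicates(["event_id"])
)

(
    events.write.format("delta")
          .mode("append")
          .partitionBy("event_date")
          .save("/mnt/lake/silver/events")
)

# Downstream readers query the Delta table directly.
silver = spark.read.format("delta").load("/mnt/lake/silver/events")
print(silver.count())
```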
Posted 2 weeks ago
10.0 - 14.0 years
0 Lacs
Haryana
On-site
The team you would be joining focuses on end-to-end development of features for a major Technology company. This project involves collaboration with various teams working on different aspects of the search features development process. In this role, your responsibilities will include extracting data from external websites using Python and other internal applications, and then ingesting this data into the Client's databases. You will also be involved in Data Modelling, Schema Creation/Maintenance, and creating and maintaining data pipelines using internal applications with a focus on Python. Furthermore, you will engage in tasks such as data analysis, data visualization, creating SQL reports, dashboards, and configuring business logic to display information. You will also be responsible for debugging issues during maintenance, ensuring that features are triggering correctly, data pipelines are functioning without failure, and SQL reports and dashboards are operational. Additionally, you will be expected to address any missing information related to the verticals you are working on. To qualify for this role, you should have at least 10 years of experience leading large, highly complex technical programs with significant scope, budget, and a large pool of resources. Prior software development experience is a must, along with proven experience in leading multiple programs from initiation through completion. Strong communication and collaboration skills are essential, including the ability to communicate effectively with executives at all levels of the organization, create audience-appropriate presentations, and work seamlessly across stakeholder groups with potentially conflicting interests. In terms of technical requirements, you should have knowledge of ETL (Extract, Transform, Load) Systems, proficiency in at least one programming language (preferably Python), relational databases, and web services, as well as experience working with Linux environments. Continuous improvement is a key aspect of this role, requiring you to write and maintain custom scripts to increase system efficiency, document manual and automated processes, and identify and resolve problem areas in the existing environment. Program management skills are also crucial, with a focus on budgeting, profitability, team management, stakeholder management, and ensuring high-quality program architecture and code style. As a leader in this role, you will be responsible for overseeing a team of engineers and analysts working on the product roadmap, ensuring software architecture and code quality, defining best practices and coding standards, making architecture and design decisions, and maintaining the stability and performance of the service you are working on. Collaboration with stakeholders to complete projects on time and contributing to the long-term strategy in your area of expertise will also be part of your responsibilities. At GlobalLogic, we offer a culture of caring where people come first, opportunities for continuous learning and development, interesting and meaningful work with impactful solutions, balance and flexibility to achieve work-life harmony, and a high-trust organization that values integrity and ethical practices. Join us at GlobalLogic, a Hitachi Group Company, and be part of our commitment to engineering impact and creating innovative digital products and experiences for our clients worldwide.,
Posted 2 weeks ago
3.0 - 8.0 years
5 - 8 Lacs
Bengaluru, Karnataka, India
On-site
Key Responsibilities:

Data Analysis & Interpretation: Analyze large datasets to uncover trends, patterns, and insights that support business objectives. Perform statistical analysis and data cleaning to ensure data quality and integrity. Translate complex data findings into clear, actionable insights for business stakeholders.

Machine Learning & Predictive Modeling: Build and deploy machine learning models to predict future trends or outcomes. Work with supervised and unsupervised learning techniques to solve complex business problems. Fine-tune models for optimization, and improve model accuracy over time.

Data Visualization & Reporting: Develop interactive data visualizations using tools like Tableau, Power BI, or custom-built dashboards to present insights in an accessible format. Prepare and present regular reports on key metrics and insights for senior management and other stakeholders.

Collaboration & Cross-functional Support: Collaborate with other teams (engineering, product, marketing, etc.) to define data-driven strategies and business requirements. Work closely with software engineers to integrate machine learning models into production systems and pipelines.

Data Collection & Data Pipelines: Design and implement data collection methods, including working with APIs, databases, and big data tools. Develop, optimize, and maintain ETL (Extract, Transform, Load) processes for clean and usable data.

Research & Innovation: Stay updated on the latest trends in data science, machine learning, and artificial intelligence (AI). Experiment with new algorithms and techniques to improve model performance and data insights.

Business Problem Solving: Work closely with stakeholders to understand business challenges and translate them into data science projects. Provide actionable recommendations based on data findings to help improve business processes and decision-making.
Posted 2 weeks ago
4.0 - 9.0 years
3 - 11 Lacs
Bengaluru, Karnataka, India
On-site
We are looking for a Senior Software Engineer - Data Engineer (AI Solutions). You'll make an impact by:
- Designing, building, and maintaining data pipelines that serve the needs of multiple stakeholders, including software developers, data scientists, analysts, and business teams.
- Ensuring data pipelines are modular, resilient, and optimized for performance and low maintenance.
- Collaborating with AI/ML teams to support training, inference, and monitoring needs through structured data delivery.
- Implementing ETL/ELT workflows for structured, semi-structured, and unstructured data using cloud-native tools.
- Working with large-scale data lakes, streaming platforms, and batch processing systems to ingest and transform data.
- Establishing robust data validation, logging, and monitoring strategies to maintain data quality and lineage.
- Optimizing data infrastructure for scalability, cost-efficiency, and observability in cloud-based environments.
- Ensuring compliance with governance policies and data access controls across projects.

Use your skills to move the world forward! You'll need:
- Bachelor's degree in Computer Science, Information Systems, or a related field.
- 4+ years of experience designing and deploying scalable data pipelines in cloud environments.
- Proficiency in Python, SQL, and data manipulation tools and frameworks (e.g., Apache Airflow, Spark, dbt, Pandas).
- Practical experience with data lakes, data warehouses (e.g., Redshift, Snowflake, BigQuery), and streaming platforms (e.g., Kafka, Kinesis).
- Strong understanding of data modeling, schema design, and data transformation patterns.
- Experience working with AWS (Glue, S3, Redshift, SageMaker) or Azure (Data Factory, Azure ML Studio, Azure Storage).
- Familiarity with CI/CD for data pipelines and infrastructure-as-code (e.g., Terraform, CloudFormation).
- Exposure to building data solutions that serve AI/ML pipelines, including feature stores and real-time data ingestion.
- Familiarity with observability, data versioning, and pipeline testing tools.
- Experience engaging with diverse stakeholders, gathering data requirements, and supporting iterative development cycles.
- Background or familiarity with the Power, Energy, or Electrification sector is a strong plus.
- Knowledge of security best practices and data compliance policies for enterprise-grade systems.
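As an illustration of the pipeline orchestration mentioned above, here is a minimal Airflow 2.x-style DAG sketch; the task bodies, schedule, and names are placeholders rather than a real deployment.

```python
# Hedged sketch of a daily ETL DAG (Airflow 2.x style). Task bodies
# and connection details are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull rows from the source system")

def transform():
    print("clean and reshape the extract")

def load():
    print("load curated rows into the warehouse")

with DAG(
    dag_id="daily_etl_example",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load
```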
Posted 2 weeks ago
6.0 - 10.0 years
0 Lacs
Karnataka
On-site
At PwC, our team in managed services specializes in providing outsourced solutions and supporting clients across various functions. We help organizations enhance their operations, reduce costs, and boost efficiency by managing key processes and functions on their behalf. Our expertise lies in project management, technology, and process optimization, allowing us to deliver high-quality services to our clients. In managed service management and strategy at PwC, the focus is on transitioning and running services, managing delivery teams, programs, commercials, performance, and delivery risk. Your role will involve continuous improvement and optimization of managed services processes, tools, and services. As a Managed Services - Data Engineer Senior Associate at PwC, you will be part of a team of problem solvers dedicated to addressing complex business issues from strategy to execution using Data, Analytics & Insights skills. Your responsibilities will include using feedback and reflection to enhance self-awareness and personal strengths, acting as a subject matter expert in your chosen domain, mentoring junior resources, and conducting knowledge sharing sessions. You will be required to demonstrate critical thinking, ensure quality of deliverables, adhere to SLAs, and participate in incident, change, and problem management. Additionally, you will be expected to review your work and that of others for quality, accuracy, and relevance, as well as demonstrate leadership capabilities by working directly with clients and leading engagements. The primary skills required for this role include ETL/ELT, SQL, SSIS, SSMS, Informatica, and Python, with secondary skills in Azure/AWS/GCP, Power BI, Advanced Excel, and Excel Macro. As a Data Ingestion Senior Associate, you should have extensive experience in developing scalable, repeatable, and secure data structures and pipelines, designing and implementing ETL processes, monitoring and troubleshooting data pipelines, implementing data security measures, and creating visually impactful dashboards for data reporting. You should also have expertise in writing and analyzing complex SQL queries, be proficient in Excel, and possess strong communication, problem-solving, quantitative, and analytical abilities. In our Managed Services platform, we focus on leveraging technology and human expertise to deliver simple yet powerful solutions to our clients. Our team of skilled professionals, combined with advanced technology and processes, enables us to provide effective outcomes and add greater value to our clients' enterprises. We aim to empower our clients to focus on their business priorities by providing flexible access to world-class business and technology capabilities that align with today's dynamic business environment. If you are a candidate who thrives in a fast-paced work environment, capable of handling critical Application Evolution Service offerings, engagement support, and strategic advisory work, then we are looking for you to join our team in the Data, Analytics & Insights Managed Service at PwC. Your role will involve working on a mix of help desk support, enhancement and optimization projects, as well as strategic roadmap initiatives, while also contributing to customer engagements from both a technical and relationship perspective.
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
Ahmedabad, Gujarat
On-site
We are seeking a highly skilled AI/ML Engineer to join our team. As an AI/ML Engineer, you will be responsible for designing, implementing, and optimizing machine learning solutions, encompassing traditional models, deep learning architectures, and generative AI systems. Your role will involve collaborating with data engineers and cross-functional teams to create scalable, ethical, and high-performance AI/ML solutions that contribute to business growth. Your key responsibilities will include developing, implementing, and optimizing AI/ML models using both traditional machine learning and deep learning techniques. You will also design and deploy generative AI models for innovative business applications, in addition to working closely with data engineers to establish and maintain high-quality data pipelines and preprocessing workflows. Integrating responsible AI practices to ensure ethical, explainable, and unbiased model behavior will be a crucial aspect of your role. Furthermore, you will be expected to develop and maintain MLOps workflows to streamline training, deployment, monitoring, and continuous integration of ML models. Your expertise will be essential in optimizing large language models (LLMs) for efficient inference, memory usage, and performance. Collaboration with product managers, data scientists, and engineering teams to seamlessly integrate AI/ML into core business processes will also be part of your responsibilities. Rigorous testing, validation, and benchmarking of models to ensure accuracy, reliability, and robustness are essential aspects of this role. To be successful in this position, you must possess a strong foundation in machine learning, deep learning, and statistical modeling techniques. Hands-on experience with TensorFlow, PyTorch, scikit-learn, or similar ML frameworks is required. Proficiency in Python and ML engineering tools such as MLflow, Kubeflow, or SageMaker is also necessary. Experience in deploying generative AI solutions, understanding responsible AI concepts, solid experience with MLOps pipelines, and proficiency in optimizing transformer models or LLMs for production workloads are key qualifications for this role. Additionally, familiarity with cloud services (AWS, GCP, Azure), containerized deployments (Docker, Kubernetes), as well as excellent problem-solving and communication skills are essential. Ability to work collaboratively with cross-functional teams is also a crucial requirement. Preferred qualifications include experience with data versioning tools like DVC or LakeFS, exposure to vector databases and retrieval-augmented generation (RAG) pipelines, knowledge of prompt engineering, fine-tuning, and quantization techniques for LLMs, familiarity with Agile workflows and sprint-based delivery, and contributions to open-source AI/ML projects or published papers in conferences/journals. Join our team at Lucent Innovation, an India-based IT solutions provider, and enjoy a work environment that promotes work-life balance. With a focus on employee well-being, we offer 5-day workweeks, flexible working hours, and a range of indoor/outdoor activities, employee trips, and celebratory events throughout the year. At Lucent Innovation, we value our employees" growth and success, providing in-house training, as well as quarterly and yearly rewards and appreciation. Perks: - 5-day workweeks - Flexible working hours - No hidden policies - Friendly working environment - In-house training - Quarterly and yearly rewards & appreciation,
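For context, a hedged MLOps sketch showing how a training run might be tracked with MLflow, one of the tools named above; the experiment name, parameters, and synthetic data are illustrative.

```python
# Hedged MLflow tracking sketch: log parameters, a metric, and the
# trained model for one run. All names and data are illustrative.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1_000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

mlflow.set_experiment("churn-baseline")

with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)

    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("test_accuracy", model.score(X_test, y_test))
    mlflow.sklearn.log_model(model, "model")
```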
Posted 2 weeks ago
2.0 - 6.0 years
0 Lacs
pune, maharashtra
On-site
The company Exela, a global leader in business process automation (BPA), is seeking a skilled Python Developer with a strong programming background and a fundamental understanding of AI concepts. The ideal candidate will design, develop, test, and maintain Python applications and services, and collaborate with data teams on integration and automation tasks.

Key Responsibilities:
- Design, develop, test, and maintain Python applications and services.
- Work with REST APIs, data pipelines, and automation scripts (see the sketch following this posting).
- Work on LLM integration, LangGraph, LangChain, preprocessing, and reporting.
- Write clean, maintainable, and efficient code.
- Conduct unit testing and participate in code reviews.
- Optimize performance and troubleshoot production issues.

Qualifications:
- Strong proficiency in Python, including OOP, standard libraries, file I/O, and error handling.
- Experience with frameworks such as Flask, FastAPI, or Django.
- Basic understanding of pandas, NumPy, and data manipulation.
- Familiarity with Git and version-control best practices.
- Experience working with JSON, CSV, and APIs.

If you are a motivated individual with a passion for Python development and AI technologies, this position offers an exciting opportunity to work on cutting-edge projects in a dynamic and collaborative environment. Join Exela's team of professionals and contribute to the success of our innovative solutions.
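To make the REST API side of this role concrete, here is a minimal, illustrative sketch using FastAPI, one of the frameworks listed above. The endpoint path and payload fields are assumptions for the example, not Exela's actual API:

```python
# Minimal sketch: a FastAPI service exposing one JSON endpoint.
# Payload fields and route are illustrative assumptions.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Item(BaseModel):
    name: str
    quantity: int = 1

@app.post("/items")
def create_item(item: Item) -> dict:
    # In a real service this would persist the item or kick off a pipeline.
    # model_dump() assumes pydantic v2; use .dict() on v1.
    return {"status": "ok", "received": item.model_dump()}

# Run locally with: uvicorn main:app --reload
```

The pydantic model gives request validation for free: a POST with a missing `name` or a non-integer `quantity` is rejected with a 422 before the handler runs.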
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
pune, maharashtra
On-site
You will join our data engineering team as an experienced Python + Databricks Developer. You will design, develop, and maintain scalable data pipelines using Databricks and Apache Spark, and write efficient Python code for data transformation, cleansing, and analytics (a brief illustrative sketch follows this posting). You will collaborate with data scientists, analysts, and engineers to understand data needs and deliver high-performance solutions, optimize and tune pipelines for performance and cost efficiency, and implement data validation, quality checks, and monitoring. Managing data workflows on cloud platforms, preferably Azure or AWS, is also part of the role, as is upholding best practices in code quality, version control, and documentation.

To succeed in this position, you should have at least 5 years of professional experience in Python development and 3 years of hands-on experience with Databricks, including notebooks, clusters, Delta Lake, and job orchestration. Strong experience with Spark, particularly PySpark, is required, along with proficiency in large-scale data processing and ETL/ELT pipelines and a solid understanding of data warehousing concepts and SQL. Experience with Azure Data Factory, AWS Glue, or other data orchestration tools would be advantageous, and familiarity with version control tools like Git is desired. Excellent problem-solving and communication skills are important for this role.
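As an illustration, a minimal PySpark cleansing-and-transformation step of the sort this role describes might look like the following sketch. File paths, column names, and the cleansing rules are assumptions for the example:

```python
# Minimal sketch: read raw CSV, drop malformed rows, normalize types,
# and write the curated result as Delta for downstream analytics.
# Paths and columns are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_cleanse").getOrCreate()

raw = spark.read.option("header", True).csv("/mnt/raw/orders.csv")

clean = (
    raw.dropna(subset=["order_id", "amount"])                     # drop incomplete rows
       .withColumn("amount", F.col("amount").cast("double"))      # enforce numeric type
       .withColumn("order_date", F.to_date("order_date", "yyyy-MM-dd"))
       .filter(F.col("amount") > 0)                               # basic validity check
)

# Delta Lake is available by default on Databricks clusters.
clean.write.format("delta").mode("overwrite").save("/mnt/curated/orders")
```

The same pattern extends naturally to the validation and monitoring duties above: each rule in the chained transformation is a checkpoint where row counts or quality metrics can be logged.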
Posted 2 weeks ago
6.0 - 10.0 years
0 Lacs
pune, maharashtra
On-site
Myers-Holum is expanding the NSAW Practice and is actively seeking experienced Enterprise Architects with strong end-to-end data warehousing and business intelligence experience to play a pivotal role leading client engagements on this team. As an Enterprise Architect specializing in Data Integration and Business Intelligence, you will lead the strategic design, architecture, and implementation of enterprise data solutions, ensuring alignment with clients' long-term business goals. You will develop and promote architectural visions for data integration, Business Intelligence (BI), and analytics solutions across business functions and applications. Leveraging technologies such as the Oracle NetSuite Analytics Warehouse (NSAW) platform, NetSuite ERP, SuiteCommerce Advanced (SCA), and other cloud-based and on-premise tools, you will design and build scalable, high-performance data warehouses and BI solutions for clients.

In this role, you will lead cross-functional teams in developing data governance frameworks, data models, and integration architectures that facilitate seamless data flow across disparate systems. By translating high-level business requirements into technical specifications, you will ensure that data architecture decisions align with broader organizational IT strategies and compliance standards. You will also architect end-to-end data pipelines, integration frameworks, and governance models that enable the flow of structured and unstructured data from multiple sources.

Your responsibilities will include providing thought leadership in evaluating emerging technologies, tools, and best practices for data management, integration, and business intelligence. You will oversee the deployment and adoption of key enterprise data initiatives, engage with C-suite executives and senior stakeholders to communicate architectural solutions, and lead and mentor technical teams, fostering a culture of continuous learning and innovation in data management, BI, and integration.

As part of the MHI team, you will contribute to the development of internal frameworks, methodologies, and standards for data architecture, integration, and BI. By staying current with industry trends and emerging technologies, you will continuously evolve the enterprise data architecture to meet the changing needs of the organization and its clients.

To qualify for this role, you should have 10+ years of relevant professional experience in data management, business intelligence, and integration architecture, including 6+ years designing and implementing enterprise data architectures. You should have expertise in cloud-based data architectures, proficiency in data integration tools, experience with relational databases, and a strong understanding of BI platforms, along with hands-on experience in data governance, security, and compliance frameworks and exceptional communication and stakeholder-management skills.

Joining Myers-Holum as an Enterprise Architect offers the opportunity to collaborate with curious, thought-provoking minds, shape your future, and positively influence change for clients. You will be part of a dynamic team that values continuous learning, growth, and innovation while providing stability and growth opportunities within a supportive, forward-thinking organization. If you are ready to embark on a rewarding career journey with Myers-Holum and contribute to the evolution of enterprise data architecture, we invite you to explore the possibilities and discover your true potential with us.
Posted 2 weeks ago