Predactica is looking for a consultant with extensive test automation skills and experience in test automation using Selenium, Python, SQL, and data analysis. This is a remote position.

Job Responsibilities
_________________
- Create test automation scripts and leverage tools for validating data and application functionality.
- Build and execute test cases to ensure the integrity of all production solutions deployed on the enterprise data platform, including data pipelines, data analytics, data marts, and data sharing repositories.
- Identify opportunities for productivity gains through quality engineering process improvements in the Agile development lifecycle; develop procedures and proposals and lead the implementation of these changes.
- Isolate, reproduce, manage, and maintain defects and test case databases, and verify fixes.
- Support user acceptance testing conducted by business partners or end users.
- Identify opportunities to reduce testing time and effort by automating repeatable tests.
- Define and champion quality and testing best practices among development teams.

Qualifications
_____________
- Over 7 years of professional work experience (or equivalent experience) is required.
- Deep understanding of test automation, scripting, and programming using Python.
- Experience creating test automation frameworks and test cases.
- Expertise in SQL.
- Python experience is a plus.
- Strong knowledge of the SDLC process.
- Experience with enterprise database technologies such as SQL Server and Oracle, data integration tools (Informatica, DataStage), business intelligence, and reporting tools.
- Demonstrated experience performing functional, end-to-end, and regression testing.
Predactica is looking for a consultant with extensive SQL data analytics and test automation skills. This is a remote, full-time contract position.

Job Responsibilities
_________________
- Create test automation scripts and leverage tools for validating data and application functionality.
- Build and execute test cases to ensure the integrity of all production solutions deployed on the enterprise data platform, including data pipelines, data analytics, data marts, and data sharing repositories.
- Identify opportunities for productivity gains through quality engineering process improvements in the Agile development lifecycle; develop procedures and proposals and lead the implementation of these changes.
- Isolate, reproduce, manage, and maintain defects and test case databases, and verify fixes.
- Support user acceptance testing conducted by business partners or end users.
- Identify opportunities to reduce testing time and effort by automating repeatable tests.
- Define and champion quality and testing best practices among development teams.

Qualifications
_____________
- Over 7 years of professional work experience (or equivalent experience) is required.
- Expertise in SQL and data analytics.
- Deep understanding of test automation, scripting, and programming using Python.
- Experience creating test automation frameworks and test cases.
- Python experience is a plus.
- Strong knowledge of the SDLC process.
- Experience with enterprise database technologies such as SQL Server and Oracle, data integration tools (Informatica, DataStage), business intelligence, and reporting tools.
- Demonstrated experience performing functional, end-to-end, and regression testing.
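The data-validation work described above can be sketched with a small example. This is a hypothetical check (table names, column names, and checks are all illustrative, not from the posting) that validates row counts and null constraints between a source and target table, using Python's built-in sqlite3 module in place of an enterprise database such as SQL Server or Oracle:

```python
import sqlite3

def validate_row_count(conn, source_table, target_table):
    """Check that the target table received every source row."""
    cur = conn.cursor()
    src = cur.execute(f"SELECT COUNT(*) FROM {source_table}").fetchone()[0]
    tgt = cur.execute(f"SELECT COUNT(*) FROM {target_table}").fetchone()[0]
    return src == tgt

def validate_no_nulls(conn, table, column):
    """Check that a required column contains no NULL values."""
    cur = conn.cursor()
    nulls = cur.execute(
        f"SELECT COUNT(*) FROM {table} WHERE {column} IS NULL"
    ).fetchone()[0]
    return nulls == 0

if __name__ == "__main__":
    # In-memory stand-in for a real pipeline's source and target tables.
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE orders_src (id INTEGER, amount REAL);
        CREATE TABLE orders_tgt (id INTEGER, amount REAL);
        INSERT INTO orders_src VALUES (1, 10.0), (2, 20.5);
        INSERT INTO orders_tgt VALUES (1, 10.0), (2, 20.5);
    """)
    assert validate_row_count(conn, "orders_src", "orders_tgt")
    assert validate_no_nulls(conn, "orders_tgt", "amount")
    print("all checks passed")
```

In practice, checks like these would typically be wrapped in a test framework such as pytest so they can run automatically against each deployment of the data pipeline.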
We are seeking a highly skilled Generative AI / LLM Engineer with expertise in Large Language Model (LLM) development, agentic AI frameworks, and strong proficiency in Python, SQL, and cloud platforms (especially GCP). The ideal candidate will design, develop, and deploy advanced AI models and intelligent agents that solve complex business problems at scale.

Key Responsibilities
- LLM Development & Fine-Tuning: Build, train, and optimize large language models for domain-specific use cases.
- Agentic AI Solutions: Design and implement autonomous or semi-autonomous AI agents capable of multi-step reasoning, decision-making, and tool orchestration.
- Generative AI Applications: Create innovative applications leveraging generative AI for text, data extraction, summarization, and reasoning tasks.
- Data Engineering & Processing: Use Python and SQL to build data pipelines, preprocess datasets, and optimize data storage for LLM workflows.
- Cloud-Native AI Deployment: Develop and deploy AI solutions using Google Cloud Platform (Vertex AI, BigQuery, AI Platform, Cloud Functions, Pub/Sub), with experience in Azure or AWS as a plus.
- Integration & Scalability: Implement AI models in production environments, ensuring scalability, reliability, and cost efficiency.
- RAG & Vector Search: Build retrieval-augmented generation pipelines using vector databases (Pinecone, Weaviate, FAISS, Milvus, etc.).
- Research & Innovation: Stay ahead of emerging trends in LLM architectures, prompt engineering, RAG, and agent frameworks.
- Collaboration: Work closely with cross-functional teams (data engineering, product, cloud architects) to align AI capabilities with business goals.

Required Skills & Qualifications
- Proven experience in generative AI and LLM development (OpenAI, Anthropic, LLaMA, Mistral, etc.).
- Hands-on experience with agentic AI concepts, tools, and frameworks (e.g., LangChain, CrewAI, Semantic Kernel).
- Strong Python programming skills for AI model development and automation.
- Solid experience with SQL for data extraction, transformation, and analysis.
- Strong cloud expertise, especially Google Cloud Platform (Vertex AI, BigQuery, GCS, Cloud Run, Pub/Sub).
- Experience with RAG architectures and vector databases.
- Familiarity with MLOps concepts for deploying and monitoring AI models in production.
- Strong problem-solving skills and ability to work in fast-paced, innovative environments.

Preferred Qualifications
- Multi-cloud experience (Azure OpenAI, AWS Bedrock, GCP Vertex AI).
- Knowledge of AI governance, compliance, and responsible AI practices.
- Experience integrating AI into enterprise applications via APIs and microservices.
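As a toy illustration of the retrieval step in a RAG pipeline like the one described above: the sketch below ranks documents against a query by cosine similarity. Everything here is a simplified stand-in, not a real implementation — bag-of-words counts replace a learned embedding model, and a sorted list replaces a vector database such as FAISS or Pinecone:

```python
import math
from collections import Counter

def embed(text):
    """Toy 'embedding': a bag-of-words term-frequency vector.
    A real RAG pipeline would use a learned embedding model instead."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query, docs, k=2):
    """Return the top-k documents most similar to the query --
    the retrieval half of retrieval-augmented generation."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

if __name__ == "__main__":
    corpus = [
        "Vertex AI hosts models on Google Cloud",
        "BigQuery runs SQL analytics at scale",
        "FAISS performs fast vector similarity search",
    ]
    print(retrieve("vector similarity search", corpus, k=1)[0])
```

In a production pipeline, the retrieved passages would then be inserted into the LLM's prompt as grounding context; the ranking-by-similarity step shown here is the piece the vector database accelerates at scale.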