Noida, Uttar Pradesh, India
Not disclosed
Remote
Contractual
About the Company
SME is a platform that connects subject-matter experts with AI projects, enabling them to contribute their knowledge to improving AI models. It offers flexible opportunities to work on tasks such as data labeling, quality assurance, and domain-specific problem-solving while earning competitive pay.

About the Role
We are hiring a Code Reviewer with deep HTML/CSS expertise to review evaluations completed by data annotators assessing AI-generated HTML/CSS code responses. Your role is to ensure that annotators follow strict quality guidelines covering instruction-following, factual correctness, and code functionality.

Responsibilities
- Review and audit annotator evaluations of AI-generated HTML/CSS code.
- Assess whether the HTML/CSS code follows the prompt instructions and is functionally correct and secure.
- Validate code snippets using a proof-of-work methodology.
- Identify inaccuracies in annotator ratings or explanations.
- Provide constructive feedback to maintain high annotation standards.
- Work within Project Atlas guidelines for evaluation integrity and consistency.

Required Qualifications
- 5–7+ years of experience in HTML/CSS development, QA, or code review.
- Strong knowledge of HTML/CSS syntax, debugging, edge cases, and testing.
- Comfort with code execution environments and testing tools.
- Excellent written communication and documentation skills.
- Experience with structured QA or annotation workflows.
- English proficiency at B2, C1, C2, or native level.

Preferred Qualifications
- Experience in AI training, LLM evaluation, or model alignment.
- Familiarity with annotation platforms.
- Exposure to RLHF (Reinforcement Learning from Human Feedback) pipelines.

Compensation: $18/hour

Why Join Us?
Join a high-impact team working at the intersection of AI and software development. Your HTML/CSS expertise will directly influence the accuracy, safety, and clarity of AI-generated code. This role offers remote flexibility, milestone-based delivery, and competitive compensation.
Chandigarh, India
Not disclosed
Remote
Contractual
About the Company
SME is a platform that connects subject-matter experts with AI projects, enabling them to contribute their knowledge to improving AI models. It offers flexible opportunities to work on tasks such as data labeling, quality assurance, and domain-specific problem-solving while earning competitive pay.

About the Role
We are hiring a Code Reviewer with deep JavaScript expertise to review evaluations completed by data annotators assessing AI-generated JavaScript code responses. Your role is to ensure that annotators follow strict quality guidelines covering instruction-following, factual correctness, and code functionality.

Responsibilities
- Review and audit annotator evaluations of AI-generated JavaScript code.
- Assess whether the JavaScript code follows the prompt instructions and is functionally correct and secure.
- Validate code snippets using a proof-of-work methodology.
- Identify inaccuracies in annotator ratings or explanations.
- Provide constructive feedback to maintain high annotation standards.
- Work within Project Atlas guidelines for evaluation integrity and consistency.

Required Qualifications
- 5–7+ years of experience in JavaScript development, QA, or code review.
- Strong knowledge of JavaScript syntax, debugging, edge cases, and testing.
- Comfort with code execution environments and testing tools.
- Excellent written communication and documentation skills.
- Experience with structured QA or annotation workflows.
- English proficiency at B2, C1, C2, or native level.

Preferred Qualifications
- Experience in AI training, LLM evaluation, or model alignment.
- Familiarity with annotation platforms.
- Exposure to RLHF (Reinforcement Learning from Human Feedback) pipelines.

Compensation: $22/hour

Why Join Us?
Join a high-impact team working at the intersection of AI and software development. Your JavaScript expertise will directly influence the accuracy, safety, and clarity of AI-generated code. This role offers remote flexibility, milestone-based delivery, and competitive compensation.
Chandigarh, India
Not disclosed
Remote
Contractual
About the Company
SME is a platform that connects subject-matter experts with AI projects, enabling them to contribute their knowledge to improving AI models. It offers flexible opportunities to work on tasks such as data labeling, quality assurance, and domain-specific problem-solving while earning competitive pay.

About the Role
We are hiring a Code Reviewer with deep Python expertise to review evaluations completed by data annotators assessing AI-generated Python code responses. Your role is to ensure that annotators follow strict quality guidelines covering instruction-following, factual correctness, and code functionality.

Responsibilities
- Review and audit annotator evaluations of AI-generated Python code.
- Assess whether the Python code follows the prompt instructions and is functionally correct and secure.
- Validate code snippets using a proof-of-work methodology.
- Identify inaccuracies in annotator ratings or explanations.
- Provide constructive feedback to maintain high annotation standards.
- Work within Project Atlas guidelines for evaluation integrity and consistency.

Required Qualifications
- 5–7+ years of experience in Python development, QA, or code review.
- Strong knowledge of Python syntax, debugging, edge cases, and testing.
- Comfort with code execution environments and testing tools.
- Excellent written communication and documentation skills.
- Experience with structured QA or annotation workflows.
- English proficiency at B2, C1, C2, or native level.

Preferred Qualifications
- Experience in AI training, LLM evaluation, or model alignment.
- Familiarity with annotation platforms.
- Exposure to RLHF (Reinforcement Learning from Human Feedback) pipelines.

Compensation: $25/hour

Why Join Us?
Join a high-impact team working at the intersection of AI and software development. Your Python expertise will directly influence the accuracy, safety, and clarity of AI-generated code. This role offers remote flexibility, milestone-based delivery, and competitive compensation.
Chandigarh, India
Not disclosed
Remote
Contractual
About the Company
SME is a platform that connects subject-matter experts with AI projects, enabling them to contribute their knowledge to improving AI models. It offers flexible opportunities to work on tasks such as data labeling, quality assurance, and domain-specific problem-solving while earning competitive pay.

About the Role
We are hiring a Code Reviewer with deep TypeScript expertise to review evaluations completed by data annotators assessing AI-generated TypeScript code responses. Your role is to ensure that annotators follow strict quality guidelines covering instruction-following, factual correctness, and code functionality.

Responsibilities
- Review and audit annotator evaluations of AI-generated TypeScript code.
- Assess whether the TypeScript code follows the prompt instructions and is functionally correct and secure.
- Validate code snippets using a proof-of-work methodology.
- Identify inaccuracies in annotator ratings or explanations.
- Provide constructive feedback to maintain high annotation standards.
- Work within Project Atlas guidelines for evaluation integrity and consistency.

Required Qualifications
- 5–7+ years of experience in TypeScript development, QA, or code review.
- Strong knowledge of TypeScript syntax, debugging, edge cases, and testing.
- Comfort with code execution environments and testing tools.
- Excellent written communication and documentation skills.
- Experience with structured QA or annotation workflows.
- English proficiency at B2, C1, C2, or native level.

Preferred Qualifications
- Experience in AI training, LLM evaluation, or model alignment.
- Familiarity with annotation platforms.
- Exposure to RLHF (Reinforcement Learning from Human Feedback) pipelines.

Compensation: $22/hour

Why Join Us?
Join a high-impact team working at the intersection of AI and software development. Your TypeScript expertise will directly influence the accuracy, safety, and clarity of AI-generated code. This role offers remote flexibility, milestone-based delivery, and competitive compensation.
Chandigarh, India
Not disclosed
Remote
Contractual
About the Company
SME is a platform that connects subject-matter experts with AI projects, enabling them to contribute their knowledge to improving AI models. It offers flexible opportunities to work on tasks such as data labeling, quality assurance, and domain-specific problem-solving while earning competitive pay.

About the Role
We are hiring a Code Reviewer with deep HTML/CSS expertise to review evaluations completed by data annotators assessing AI-generated HTML/CSS code responses. Your role is to ensure that annotators follow strict quality guidelines covering instruction-following, factual correctness, and code functionality.

Responsibilities
- Review and audit annotator evaluations of AI-generated HTML/CSS code.
- Assess whether the HTML/CSS code follows the prompt instructions and is functionally correct and secure.
- Validate code snippets using a proof-of-work methodology.
- Identify inaccuracies in annotator ratings or explanations.
- Provide constructive feedback to maintain high annotation standards.
- Work within Project Atlas guidelines for evaluation integrity and consistency.

Required Qualifications
- 5–7+ years of experience in HTML/CSS development, QA, or code review.
- Strong knowledge of HTML/CSS syntax, debugging, edge cases, and testing.
- Comfort with code execution environments and testing tools.
- Excellent written communication and documentation skills.
- Experience with structured QA or annotation workflows.
- English proficiency at B2, C1, C2, or native level.

Preferred Qualifications
- Experience in AI training, LLM evaluation, or model alignment.
- Familiarity with annotation platforms.
- Exposure to RLHF (Reinforcement Learning from Human Feedback) pipelines.

Compensation: $18/hour

Why Join Us?
Join a high-impact team working at the intersection of AI and software development. Your HTML/CSS expertise will directly influence the accuracy, safety, and clarity of AI-generated code. This role offers remote flexibility, milestone-based delivery, and competitive compensation.
Delhi, India
Not disclosed
Remote
Contractual
About the Company
SME is a platform that connects subject-matter experts with AI projects, enabling them to contribute their knowledge to improving AI models. It offers flexible opportunities to work on tasks such as data labeling, quality assurance, and domain-specific problem-solving while earning competitive pay.

About the Role
We are hiring a Code Reviewer with deep HTML/CSS expertise to review evaluations completed by data annotators assessing AI-generated HTML/CSS code responses. Your role is to ensure that annotators follow strict quality guidelines covering instruction-following, factual correctness, and code functionality.

Responsibilities
- Review and audit annotator evaluations of AI-generated HTML/CSS code.
- Assess whether the HTML/CSS code follows the prompt instructions and is functionally correct and secure.
- Validate code snippets using a proof-of-work methodology.
- Identify inaccuracies in annotator ratings or explanations.
- Provide constructive feedback to maintain high annotation standards.
- Work within Project Atlas guidelines for evaluation integrity and consistency.

Required Qualifications
- 5–7+ years of experience in HTML/CSS development, QA, or code review.
- Strong knowledge of HTML/CSS syntax, debugging, edge cases, and testing.
- Comfort with code execution environments and testing tools.
- Excellent written communication and documentation skills.
- Experience with structured QA or annotation workflows.
- English proficiency at B2, C1, C2, or native level.

Preferred Qualifications
- Experience in AI training, LLM evaluation, or model alignment.
- Familiarity with annotation platforms.
- Exposure to RLHF (Reinforcement Learning from Human Feedback) pipelines.

Compensation: $18/hour

Why Join Us?
Join a high-impact team working at the intersection of AI and software development. Your HTML/CSS expertise will directly influence the accuracy, safety, and clarity of AI-generated code. This role offers remote flexibility, milestone-based delivery, and competitive compensation.
Delhi, India
Not disclosed
Remote
Contractual
About the Company
SME is a platform that connects subject-matter experts with AI projects, enabling them to contribute their knowledge to improving AI models. It offers flexible opportunities to work on tasks such as data labeling, quality assurance, and domain-specific problem-solving while earning competitive pay.

About the Role
We are hiring a Code Reviewer with deep TypeScript expertise to review evaluations completed by data annotators assessing AI-generated TypeScript code responses. Your role is to ensure that annotators follow strict quality guidelines covering instruction-following, factual correctness, and code functionality.

Responsibilities
- Review and audit annotator evaluations of AI-generated TypeScript code.
- Assess whether the TypeScript code follows the prompt instructions and is functionally correct and secure.
- Validate code snippets using a proof-of-work methodology.
- Identify inaccuracies in annotator ratings or explanations.
- Provide constructive feedback to maintain high annotation standards.
- Work within Project Atlas guidelines for evaluation integrity and consistency.

Required Qualifications
- 5–7+ years of experience in TypeScript development, QA, or code review.
- Strong knowledge of TypeScript syntax, debugging, edge cases, and testing.
- Comfort with code execution environments and testing tools.
- Excellent written communication and documentation skills.
- Experience with structured QA or annotation workflows.
- English proficiency at B2, C1, C2, or native level.

Preferred Qualifications
- Experience in AI training, LLM evaluation, or model alignment.
- Familiarity with annotation platforms.
- Exposure to RLHF (Reinforcement Learning from Human Feedback) pipelines.

Compensation: $22/hour

Why Join Us?
Join a high-impact team working at the intersection of AI and software development. Your TypeScript expertise will directly influence the accuracy, safety, and clarity of AI-generated code. This role offers remote flexibility, milestone-based delivery, and competitive compensation.
India
Not disclosed
Remote
Contractual
About the Company
SME is a platform that connects subject-matter experts with AI projects, enabling them to contribute their knowledge to improving AI models. It offers flexible opportunities to work on tasks such as data labeling, quality assurance, and domain-specific problem-solving while earning competitive pay.

About the Role
We are hiring a Code Reviewer with deep Python expertise to review evaluations completed by data annotators assessing AI-generated Python code responses. Your role is to ensure that annotators follow strict quality guidelines covering instruction-following, factual correctness, and code functionality.

Responsibilities
- Review and audit annotator evaluations of AI-generated Python code.
- Assess whether the Python code follows the prompt instructions and is functionally correct and secure.
- Validate code snippets using a proof-of-work methodology.
- Identify inaccuracies in annotator ratings or explanations.
- Provide constructive feedback to maintain high annotation standards.
- Work within Project Atlas guidelines for evaluation integrity and consistency.

Required Qualifications
- 5–7+ years of experience in Python development, QA, or code review.
- Strong knowledge of Python syntax, debugging, edge cases, and testing.
- Comfort with code execution environments and testing tools.
- Excellent written communication and documentation skills.
- Experience with structured QA or annotation workflows.
- English proficiency at B2, C1, C2, or native level.

Preferred Qualifications
- Experience in AI training, LLM evaluation, or model alignment.
- Familiarity with annotation platforms.
- Exposure to RLHF (Reinforcement Learning from Human Feedback) pipelines.

Compensation: $25/hour

Why Join Us?
Join a high-impact team working at the intersection of AI and software development. Your Python expertise will directly influence the accuracy, safety, and clarity of AI-generated code. This role offers remote flexibility, milestone-based delivery, and competitive compensation.
Noida, Uttar Pradesh, India
Not disclosed
Remote
Contractual
About the Company
SME is a platform that connects subject-matter experts with AI projects, enabling them to contribute their knowledge to improving AI models. It offers flexible opportunities to work on tasks such as data labeling, quality assurance, and domain-specific problem-solving while earning competitive pay.

About the Role
We are hiring a Code Reviewer with deep Java expertise to review evaluations completed by data annotators assessing AI-generated Java code responses. Your role is to ensure that annotators follow strict quality guidelines covering instruction-following, factual correctness, and code functionality.

Responsibilities
- Review and audit annotator evaluations of AI-generated Java code.
- Assess whether the Java code follows the prompt instructions and is functionally correct and secure.
- Validate code snippets using a proof-of-work methodology.
- Identify inaccuracies in annotator ratings or explanations.
- Provide constructive feedback to maintain high annotation standards.
- Work within Project Atlas guidelines for evaluation integrity and consistency.

Required Qualifications
- 5–7+ years of experience in Java development, QA, or code review.
- Strong knowledge of Java syntax, debugging, edge cases, and testing.
- Comfort with code execution environments and testing tools.
- Excellent written communication and documentation skills.
- Experience with structured QA or annotation workflows.
- English proficiency at B2, C1, C2, or native level.

Preferred Qualifications
- Experience in AI training, LLM evaluation, or model alignment.
- Familiarity with annotation platforms.
- Exposure to RLHF (Reinforcement Learning from Human Feedback) pipelines.

Compensation: $18/hour

Why Join Us?
Join a high-impact team working at the intersection of AI and software development. Your Java expertise will directly influence the accuracy, safety, and clarity of AI-generated code. This role offers remote flexibility, milestone-based delivery, and competitive compensation.
Noida, Uttar Pradesh, India
Not disclosed
Remote
Contractual
About the Company
SME is a platform that connects subject-matter experts with AI projects, enabling them to contribute their knowledge to improving AI models. It offers flexible opportunities to work on tasks such as data labeling, quality assurance, and domain-specific problem-solving while earning competitive pay.

About the Role
We are hiring a Code Reviewer with deep C# expertise to review evaluations completed by data annotators assessing AI-generated C# code responses. Your role is to ensure that annotators follow strict quality guidelines covering instruction-following, factual correctness, and code functionality.

Responsibilities
- Review and audit annotator evaluations of AI-generated C# code.
- Assess whether the C# code follows the prompt instructions and is functionally correct and secure.
- Validate code snippets using a proof-of-work methodology.
- Identify inaccuracies in annotator ratings or explanations.
- Provide constructive feedback to maintain high annotation standards.
- Work within Project Atlas guidelines for evaluation integrity and consistency.

Required Qualifications
- 5–7+ years of experience in C# development, QA, or code review.
- Strong knowledge of C# syntax, debugging, edge cases, and testing.
- Comfort with code execution environments and testing tools.
- Excellent written communication and documentation skills.
- Experience with structured QA or annotation workflows.
- English proficiency at B2, C1, C2, or native level.

Preferred Qualifications
- Experience in AI training, LLM evaluation, or model alignment.
- Familiarity with annotation platforms.
- Exposure to RLHF (Reinforcement Learning from Human Feedback) pipelines.

Compensation: $22/hour

Why Join Us?
Join a high-impact team working at the intersection of AI and software development. Your C# expertise will directly influence the accuracy, safety, and clarity of AI-generated code. This role offers remote flexibility, milestone-based delivery, and competitive compensation.
Noida, Uttar Pradesh, India
Not disclosed
Remote
Contractual
About the Company
SME is a platform that connects subject-matter experts with AI projects, enabling them to contribute their knowledge to improving AI models. It offers flexible opportunities to work on tasks such as data labeling, quality assurance, and domain-specific problem-solving while earning competitive pay.

About the Role
We are hiring a Code Reviewer with deep C++ expertise to review evaluations completed by data annotators assessing AI-generated C++ code responses. Your role is to ensure that annotators follow strict quality guidelines covering instruction-following, factual correctness, and code functionality.

Responsibilities
- Review and audit annotator evaluations of AI-generated C++ code.
- Assess whether the C++ code follows the prompt instructions and is functionally correct and secure.
- Validate code snippets using a proof-of-work methodology.
- Identify inaccuracies in annotator ratings or explanations.
- Provide constructive feedback to maintain high annotation standards.
- Work within Project Atlas guidelines for evaluation integrity and consistency.

Required Qualifications
- 5–7+ years of experience in C++ development, QA, or code review.
- Strong knowledge of C++ syntax, debugging, edge cases, and testing.
- Comfort with code execution environments and testing tools.
- Excellent written communication and documentation skills.
- Experience with structured QA or annotation workflows.
- English proficiency at B2, C1, C2, or native level.

Preferred Qualifications
- Experience in AI training, LLM evaluation, or model alignment.
- Familiarity with annotation platforms.
- Exposure to RLHF (Reinforcement Learning from Human Feedback) pipelines.

Compensation: $22/hour

Why Join Us?
Join a high-impact team working at the intersection of AI and software development. Your C++ expertise will directly influence the accuracy, safety, and clarity of AI-generated code. This role offers remote flexibility, milestone-based delivery, and competitive compensation.
Pune, Maharashtra, India
Not disclosed
Remote
Contractual
About the Company
SME is a platform that connects subject-matter experts with AI projects, enabling them to contribute their knowledge to improving AI models. It offers flexible opportunities to work on tasks such as data labeling, quality assurance, and domain-specific problem-solving while earning competitive pay.

About the Role
We are hiring a Code Reviewer with deep C++ expertise to review evaluations completed by data annotators assessing AI-generated C++ code responses. Your role is to ensure that annotators follow strict quality guidelines covering instruction-following, factual correctness, and code functionality.

Responsibilities
- Review and audit annotator evaluations of AI-generated C++ code.
- Assess whether the C++ code follows the prompt instructions and is functionally correct and secure.
- Validate code snippets using a proof-of-work methodology.
- Identify inaccuracies in annotator ratings or explanations.
- Provide constructive feedback to maintain high annotation standards.
- Work within Project Atlas guidelines for evaluation integrity and consistency.

Required Qualifications
- 5–7+ years of experience in C++ development, QA, or code review.
- Strong knowledge of C++ syntax, debugging, edge cases, and testing.
- Comfort with code execution environments and testing tools.
- Excellent written communication and documentation skills.
- Experience with structured QA or annotation workflows.
- English proficiency at B2, C1, C2, or native level.

Preferred Qualifications
- Experience in AI training, LLM evaluation, or model alignment.
- Familiarity with annotation platforms.
- Exposure to RLHF (Reinforcement Learning from Human Feedback) pipelines.

Compensation: $22/hour

Why Join Us?
Join a high-impact team working at the intersection of AI and software development. Your C++ expertise will directly influence the accuracy, safety, and clarity of AI-generated code. This role offers remote flexibility, milestone-based delivery, and competitive compensation.
Pune, Maharashtra, India
Not disclosed
Remote
Contractual
About the Company
SME is a platform that connects subject-matter experts with AI projects, enabling them to contribute their knowledge to improving AI models. It offers flexible opportunities to work on tasks such as data labeling, quality assurance, and domain-specific problem-solving while earning competitive pay.

About the Role
We are hiring a Code Reviewer with deep C# expertise to review evaluations completed by data annotators assessing AI-generated C# code responses. Your role is to ensure that annotators follow strict quality guidelines covering instruction-following, factual correctness, and code functionality.

Responsibilities
- Review and audit annotator evaluations of AI-generated C# code.
- Assess whether the C# code follows the prompt instructions and is functionally correct and secure.
- Validate code snippets using a proof-of-work methodology.
- Identify inaccuracies in annotator ratings or explanations.
- Provide constructive feedback to maintain high annotation standards.
- Work within Project Atlas guidelines for evaluation integrity and consistency.

Required Qualifications
- 5–7+ years of experience in C# development, QA, or code review.
- Strong knowledge of C# syntax, debugging, edge cases, and testing.
- Comfort with code execution environments and testing tools.
- Excellent written communication and documentation skills.
- Experience with structured QA or annotation workflows.
- English proficiency at B2, C1, C2, or native level.

Preferred Qualifications
- Experience in AI training, LLM evaluation, or model alignment.
- Familiarity with annotation platforms.
- Exposure to RLHF (Reinforcement Learning from Human Feedback) pipelines.

Compensation: $22/hour

Why Join Us?
Join a high-impact team working at the intersection of AI and software development. Your C# expertise will directly influence the accuracy, safety, and clarity of AI-generated code. This role offers remote flexibility, milestone-based delivery, and competitive compensation.
Pune, Maharashtra, India
Not disclosed
Remote
Contractual
About the Company
SME is a platform that connects subject-matter experts with AI projects, enabling them to contribute their knowledge to improving AI models. It offers flexible opportunities to work on tasks such as data labeling, quality assurance, and domain-specific problem-solving while earning competitive pay.

About the Role
We are hiring a Code Reviewer with deep Java expertise to review evaluations completed by data annotators assessing AI-generated Java code responses. Your role is to ensure that annotators follow strict quality guidelines covering instruction-following, factual correctness, and code functionality.

Responsibilities
- Review and audit annotator evaluations of AI-generated Java code.
- Assess whether the Java code follows the prompt instructions and is functionally correct and secure.
- Validate code snippets using a proof-of-work methodology.
- Identify inaccuracies in annotator ratings or explanations.
- Provide constructive feedback to maintain high annotation standards.
- Work within Project Atlas guidelines for evaluation integrity and consistency.

Required Qualifications
- 5–7+ years of experience in Java development, QA, or code review.
- Strong knowledge of Java syntax, debugging, edge cases, and testing.
- Comfort with code execution environments and testing tools.
- Excellent written communication and documentation skills.
- Experience with structured QA or annotation workflows.
- English proficiency at B2, C1, C2, or native level.

Preferred Qualifications
- Experience in AI training, LLM evaluation, or model alignment.
- Familiarity with annotation platforms.
- Exposure to RLHF (Reinforcement Learning from Human Feedback) pipelines.

Compensation: $18/hour

Why Join Us?
Join a high-impact team working at the intersection of AI and software development. Your Java expertise will directly influence the accuracy, safety, and clarity of AI-generated code. This role offers remote flexibility, milestone-based delivery, and competitive compensation.
Chandigarh, India
Not disclosed
Remote
Contractual
About the Company
SME is a platform that connects subject-matter experts with AI projects, enabling them to contribute their knowledge to improving AI models. It offers flexible opportunities to work on tasks such as data labeling, quality assurance, and domain-specific problem-solving while earning competitive pay.

About the Role
We are hiring a Code Reviewer with deep C# expertise to review evaluations completed by data annotators assessing AI-generated C# code responses. Your role is to ensure that annotators follow strict quality guidelines covering instruction-following, factual correctness, and code functionality.

Responsibilities
- Review and audit annotator evaluations of AI-generated C# code.
- Assess whether the C# code follows the prompt instructions and is functionally correct and secure.
- Validate code snippets using a proof-of-work methodology.
- Identify inaccuracies in annotator ratings or explanations.
- Provide constructive feedback to maintain high annotation standards.
- Work within Project Atlas guidelines for evaluation integrity and consistency.

Required Qualifications
- 5–7+ years of experience in C# development, QA, or code review.
- Strong knowledge of C# syntax, debugging, edge cases, and testing.
- Comfort with code execution environments and testing tools.
- Excellent written communication and documentation skills.
- Experience with structured QA or annotation workflows.
- English proficiency at B2, C1, C2, or native level.

Preferred Qualifications
- Experience in AI training, LLM evaluation, or model alignment.
- Familiarity with annotation platforms.
- Exposure to RLHF (Reinforcement Learning from Human Feedback) pipelines.

Compensation: $22/hour

Why Join Us?
Join a high-impact team working at the intersection of AI and software development. Your C# expertise will directly influence the accuracy, safety, and clarity of AI-generated code. This role offers remote flexibility, milestone-based delivery, and competitive compensation.
Chandigarh, India
Not disclosed
Remote
Contractual
About the Company
SME is a platform that connects subject-matter experts with AI projects, enabling them to contribute their knowledge to improving AI models. It offers flexible opportunities to work on tasks such as data labeling, quality assurance, and domain-specific problem-solving while earning competitive pay.

About the Role
We are hiring a Code Reviewer with deep Java expertise to review evaluations completed by data annotators assessing AI-generated Java code responses. Your role is to ensure that annotators follow strict quality guidelines covering instruction-following, factual correctness, and code functionality.

Responsibilities
- Review and audit annotator evaluations of AI-generated Java code.
- Assess whether the Java code follows the prompt instructions and is functionally correct and secure.
- Validate code snippets using a proof-of-work methodology.
- Identify inaccuracies in annotator ratings or explanations.
- Provide constructive feedback to maintain high annotation standards.
- Work within Project Atlas guidelines for evaluation integrity and consistency.

Required Qualifications
- 5–7+ years of experience in Java development, QA, or code review.
- Strong knowledge of Java syntax, debugging, edge cases, and testing.
- Comfort with code execution environments and testing tools.
- Excellent written communication and documentation skills.
- Experience with structured QA or annotation workflows.
- English proficiency at B2, C1, C2, or native level.

Preferred Qualifications
- Experience in AI training, LLM evaluation, or model alignment.
- Familiarity with annotation platforms.
- Exposure to RLHF (Reinforcement Learning from Human Feedback) pipelines.

Compensation: $18/hour

Why Join Us?
Join a high-impact team working at the intersection of AI and software development. Your Java expertise will directly influence the accuracy, safety, and clarity of AI-generated code. This role offers remote flexibility, milestone-based delivery, and competitive compensation.
Chandigarh, India
Not disclosed
Remote
Contractual
About Company
SME is a platform that bridges subject-matter experts with AI projects, enabling them to contribute their knowledge to improve AI models. It offers flexible opportunities to work on tasks like data labeling, quality assurance, and domain-specific problem-solving while earning competitive pay.

About the Role
We’re hiring a Code Reviewer with deep C++ expertise to review evaluations completed by data annotators assessing AI-generated C++ code responses. Your role is to ensure that annotators follow strict quality guidelines related to instruction-following, factual correctness, and code functionality.

Responsibilities
- Review and audit annotator evaluations of AI-generated C++ code.
- Assess whether the C++ code follows the prompt instructions and is functionally correct and secure.
- Validate code snippets using a proof-of-work methodology.
- Identify inaccuracies in annotator ratings or explanations.
- Provide constructive feedback to maintain high annotation standards.
- Work within Project Atlas guidelines for evaluation integrity and consistency.

Required Qualifications
- 5–7+ years of experience in C++ development, QA, or code review.
- Strong knowledge of C++ syntax, debugging, edge cases, and testing.
- Comfortable using code execution environments and testing tools.
- Excellent written communication and documentation skills.
- Experience working with structured QA or annotation workflows.
- English proficiency at B2, C1, C2, or Native level.

Preferred Qualifications
- Experience in AI training, LLM evaluation, or model alignment.
- Familiarity with annotation platforms.
- Exposure to RLHF (Reinforcement Learning from Human Feedback) pipelines.

Compensation: $22/hour

Why Join Us?
Join a high-impact team working at the intersection of AI and software development. Your C++ expertise will directly influence the accuracy, safety, and clarity of AI-generated code. This role offers remote flexibility, milestone-based delivery, and competitive compensation.
India
Not disclosed
Remote
Contractual
About Company
SME is a platform that bridges subject-matter experts with AI projects, enabling them to contribute their knowledge to improve AI models. It offers flexible opportunities to work on tasks like data labeling, quality assurance, and domain-specific problem-solving while earning competitive pay.

About the Role
We’re hiring a Code Reviewer with deep C# expertise to review evaluations completed by data annotators assessing AI-generated C# code responses. Your role is to ensure that annotators follow strict quality guidelines related to instruction-following, factual correctness, and code functionality.

Responsibilities
- Review and audit annotator evaluations of AI-generated C# code.
- Assess whether the C# code follows the prompt instructions and is functionally correct and secure.
- Validate code snippets using a proof-of-work methodology.
- Identify inaccuracies in annotator ratings or explanations.
- Provide constructive feedback to maintain high annotation standards.
- Work within Project Atlas guidelines for evaluation integrity and consistency.

Required Qualifications
- 5–7+ years of experience in C# development, QA, or code review.
- Strong knowledge of C# syntax, debugging, edge cases, and testing.
- Comfortable using code execution environments and testing tools.
- Excellent written communication and documentation skills.
- Experience working with structured QA or annotation workflows.
- English proficiency at B2, C1, C2, or Native level.

Preferred Qualifications
- Experience in AI training, LLM evaluation, or model alignment.
- Familiarity with annotation platforms.
- Exposure to RLHF (Reinforcement Learning from Human Feedback) pipelines.

Compensation: $22/hour

Why Join Us?
Join a high-impact team working at the intersection of AI and software development. Your C# expertise will directly influence the accuracy, safety, and clarity of AI-generated code. This role offers remote flexibility, milestone-based delivery, and competitive compensation.
India
Not disclosed
Remote
Contractual
About Company
SME is a platform that bridges subject-matter experts with AI projects, enabling them to contribute their knowledge to improve AI models. It offers flexible opportunities to work on tasks like data labeling, quality assurance, and domain-specific problem-solving while earning competitive pay.

About the Role
We’re hiring a Code Reviewer with deep C++ expertise to review evaluations completed by data annotators assessing AI-generated C++ code responses. Your role is to ensure that annotators follow strict quality guidelines related to instruction-following, factual correctness, and code functionality.

Responsibilities
- Review and audit annotator evaluations of AI-generated C++ code.
- Assess whether the C++ code follows the prompt instructions and is functionally correct and secure.
- Validate code snippets using a proof-of-work methodology.
- Identify inaccuracies in annotator ratings or explanations.
- Provide constructive feedback to maintain high annotation standards.
- Work within Project Atlas guidelines for evaluation integrity and consistency.

Required Qualifications
- 5–7+ years of experience in C++ development, QA, or code review.
- Strong knowledge of C++ syntax, debugging, edge cases, and testing.
- Comfortable using code execution environments and testing tools.
- Excellent written communication and documentation skills.
- Experience working with structured QA or annotation workflows.
- English proficiency at B2, C1, C2, or Native level.

Preferred Qualifications
- Experience in AI training, LLM evaluation, or model alignment.
- Familiarity with annotation platforms.
- Exposure to RLHF (Reinforcement Learning from Human Feedback) pipelines.

Compensation: $22/hour

Why Join Us?
Join a high-impact team working at the intersection of AI and software development. Your C++ expertise will directly influence the accuracy, safety, and clarity of AI-generated code. This role offers remote flexibility, milestone-based delivery, and competitive compensation.
Delhi, India
Not disclosed
Remote
Contractual
About Company
SME is a platform that bridges subject-matter experts with AI projects, enabling them to contribute their knowledge to improve AI models. It offers flexible opportunities to work on tasks like data labeling, quality assurance, and domain-specific problem-solving while earning competitive pay.

About the Role
We’re hiring a Code Reviewer with deep Java expertise to review evaluations completed by data annotators assessing AI-generated Java code responses. Your role is to ensure that annotators follow strict quality guidelines related to instruction-following, factual correctness, and code functionality.

Responsibilities
- Review and audit annotator evaluations of AI-generated Java code.
- Assess whether the Java code follows the prompt instructions and is functionally correct and secure.
- Validate code snippets using a proof-of-work methodology.
- Identify inaccuracies in annotator ratings or explanations.
- Provide constructive feedback to maintain high annotation standards.
- Work within Project Atlas guidelines for evaluation integrity and consistency.

Required Qualifications
- 5–7+ years of experience in Java development, QA, or code review.
- Strong knowledge of Java syntax, debugging, edge cases, and testing.
- Comfortable using code execution environments and testing tools.
- Excellent written communication and documentation skills.
- Experience working with structured QA or annotation workflows.
- English proficiency at B2, C1, C2, or Native level.

Preferred Qualifications
- Experience in AI training, LLM evaluation, or model alignment.
- Familiarity with annotation platforms.
- Exposure to RLHF (Reinforcement Learning from Human Feedback) pipelines.

Compensation: $18/hour

Why Join Us?
Join a high-impact team working at the intersection of AI and software development. Your Java expertise will directly influence the accuracy, safety, and clarity of AI-generated code. This role offers remote flexibility, milestone-based delivery, and competitive compensation.
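The listings above do not define the "proof-of-work methodology" they mention. Assuming it means actually compiling and executing a candidate snippet rather than judging it by eye, a minimal sketch in Java (class and method names here are illustrative, not from the listings) could look like:

```java
// Minimal sketch of snippet validation by execution (an assumption about
// what "proof-of-work" means): write the AI-generated source to a temp
// directory, compile it with the JDK compiler API, load the class, and
// invoke a static no-argument method to observe real behavior.
import javax.tools.JavaCompiler;
import javax.tools.ToolProvider;
import java.lang.reflect.Method;
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Files;
import java.nio.file.Path;

public class SnippetValidator {
    public static Object runSnippet(String className, String source, String method)
            throws Exception {
        Path dir = Files.createTempDirectory("snippet");
        Path src = dir.resolve(className + ".java");
        Files.writeString(src, source);

        // Requires a JDK (getSystemJavaCompiler() returns null on a bare JRE).
        JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
        int status = compiler.run(null, null, null, src.toString());
        if (status != 0) {
            throw new IllegalStateException("snippet does not compile");
        }

        // javac places Candidate.class next to the source; load and invoke it.
        try (URLClassLoader loader =
                 new URLClassLoader(new URL[]{dir.toUri().toURL()})) {
            Class<?> cls = loader.loadClass(className);
            Method m = cls.getMethod(method);
            return m.invoke(null); // static, no-arg
        }
    }

    public static void main(String[] args) throws Exception {
        // A hypothetical AI-generated snippet under review.
        String snippet =
            "public class Candidate {\n" +
            "    public static int answer() { return 6 * 7; }\n" +
            "}\n";
        Object out = runSnippet("Candidate", snippet, "answer");
        System.out.println("snippet returned: " + out);
    }
}
```

Running the snippet this way gives the reviewer concrete evidence (compilation status plus an observed return value) to check an annotator's "functionally correct" rating against, instead of relying on inspection alone.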